question (string) | question_id (int64) | score (int64) | link (string) | body (string) | answers (list) | verbolised (string)
---|---|---|---|---|---|---
Summation symbol with just one letter below it? How to verbally state. | 2,083,774 | 5 | https://math.stackexchange.com/questions/2083774/summation-symbol-with-just-one-letter-below-it-how-to-verbally-state | <p><a href="https://i.stack.imgur.com/Q2NSO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Q2NSO.png" alt="enter image description here"></a></p>
<p>I'm used to seeing summation capital sigmas with a "start" at the bottom and a "stop" at the top, and would say "the sum from $i$ to $100$" or similar. But in this case, there is just a $y$ at the bottom.</p>
<p>How would I write this out so a screen reader would be able to read it aloud and get the correct idea across to the listener?</p>
| [
{
"answer_id": 2083776,
"score": 6,
"text": "<p>I would say \"sum of $f$ of $x$ and $y$ over all $y$.\" This is usually what is meant when no upper or lower limits are given. </p>\n"
},
{
"answer_id": 2083781,
"score": 4,
"text": "<p>It is an abbreviation of summing over a set, which is usually clear from the context. Example: In Linear Integer Programming we could have some set $V$ representing $n$ nodes and some value $c_i$ for each node $i \\in V$. The following terms are equivalent $$\\sum_{i \\in V}c_i \\quad \\text{and} \\quad \\sum_i c_i.$$ The advantage is that is <strong>saves time</strong> but on the other hand it is lacks formallity.</p>\n"
},
{
"answer_id": 2083792,
"score": 3,
"text": "<p>It depends on context. Sometimes it means \"sum over all possible values of $y$\" (where which values of $y$ are allowed should be clear from the context), but sometimes it means an <a href=\"https://en.wikipedia.org/wiki/Indefinite_sum\" rel=\"nofollow noreferrer\">indefinite sum</a> or \"anti-difference\", analogous to an indefinite integral or antiderivative:</p>\n\n<p>$$ \\sum_y f(x,y) = g(x,y)\\ \\text{where}\\ g(x,y+1) - g(x,y) = f(x,y) $$</p>\n"
}
] | Question: Summation symbol with just one letter below it? How to verbally state.
Answer 1: <p>I would say "sum of $f$ of $x$ and $y$ over all $y$." This is usually what is meant when no upper or lower limits are given. </p>
Answer 2: <p>It is an abbreviation of summing over a set, which is usually clear from the context. Example: In Linear Integer Programming we could have some set $V$ representing $n$ nodes and some value $c_i$ for each node $i \in V$. The following terms are equivalent: $$\sum_{i \in V}c_i \quad \text{and} \quad \sum_i c_i.$$ The advantage is that it <strong>saves time</strong>, but on the other hand it lacks formality.</p>
Answer 3: <p>It depends on context. Sometimes it means "sum over all possible values of $y$" (where which values of $y$ are allowed should be clear from the context), but sometimes it means an <a href="https://en.wikipedia.org/wiki/Indefinite_sum" rel="nofollow noreferrer">indefinite sum</a> or "anti-difference", analogous to an indefinite integral or antiderivative:</p>
<p>$$ \sum_y f(x,y) = g(x,y)\ \text{where}\ g(x,y+1) - g(x,y) = f(x,y) $$</p>
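As a concrete check of the anti-difference reading in Answer 3 (a worked example added here under the same definitions, not part of the original answers): taking $f(x,y) = y$, one indefinite sum is $g(x,y) = \frac{y(y-1)}{2}$, since
$$ g(x,y+1) - g(x,y) = \frac{(y+1)y}{2} - \frac{y(y-1)}{2} = y = f(x,y), $$
just as $\frac{x^2}{2}$ is an antiderivative of $x$ in the continuous analogue.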
|
Show given property for a sequence $a_{n}$ | 2,083,855 | 5 | https://math.stackexchange.com/questions/2083855/show-given-property-for-a-sequence-a-n | <p>The sequence $a_n$ is defined as follows:
$a_0$ is an arbitrary real number, and
$a_{n+1} = \lfloor a_n \rfloor \left( a_n - \lfloor a_n \rfloor \right).$</p>
<p>Show that for every $ a_0$:</p>
<p>$$\exists m\geq0, \forall n \geq m, a_{n+2}= a_n$$</p>
<p>Here $\lfloor x \rfloor$ denotes the floor function; for example, $\lfloor 3.2 \rfloor = 3$ and $\lfloor -3.2 \rfloor = -4$.</p>
<p>Here is my attempt: <a href="https://i.stack.imgur.com/ewZbo.jpg" rel="nofollow noreferrer">[link]</a></p>
<p>What I have noticed is that, due to the floor function $\lfloor x \rfloor$, all of these sequences seem to approach zero. I am not sure whether a sequence could instead end up periodically switching between a few particular values, but maybe.</p>
<p>Though why I think it approaches zero:</p>
<p>Let $ a_0= 16.2 \Rightarrow
a_1 = 16 (16.2 - 16) = 16 \cdot 0.2 = 3.2 \Rightarrow
a_2 = 3 (3.2 - 3) = 3 \cdot 0.2 = 0.6 \Rightarrow
a_3 = 0 \cdot (0.6 - 0) = 0$</p>
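<p>To make the long-run behavior concrete, here is a minimal numerical sketch of the recursion (added for illustration; the helper name <code>orbit</code> and the sample starting values are my own, and ordinary floating point is used loosely):</p>

```python
import math

def orbit(a0, steps=40):
    """Iterate a -> floor(a) * (a - floor(a)) starting from a0."""
    xs = [a0]
    for _ in range(steps):
        a = xs[-1]
        f = math.floor(a)       # note math.floor(-3.2) == -4, as in the question
        xs.append(f * (a - f))  # next term of the recursion
    return xs

for a0 in (16.2, -3.7, -0.25, -0.5):
    tail = [round(x, 6) for x in orbit(a0)[-4:]]
    print(f"a0 = {a0}: last terms = {tail}")

# On these samples, positive starts collapse to 0, while negative starts
# settle into a 2-cycle or a fixed point, consistent with a_{n+2} = a_n
# holding for all sufficiently large n.
```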
| [
{
"answer_id": 2084011,
"score": 3,
"text": "<p>The case $a_n \\geq 0$ (i.e. $a_0 \\geq 0$) is straightforward as $a_{n+1} = \\lfloor a_n\\rfloor(a_n-\\lfloor a_n\\rfloor) \\leq \\lfloor a_n\\rfloor \\leq a_n$ so $a_n$ is decreasing and is bounded below by $0$ so by the monotone convergence theorem the sequence converges to the only possible (positive) limit-point $a_n = 0$. By the definition of convergence there is a $N$ such that $0 < a_n < 1$ for $n\\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>\n\n<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \\delta$ where $\\delta \\in [0,1)$ and $m \\in\\mathbb{N}$. Depending on the value of $\\delta$ relative to $m$ we can have different behavior: </p>\n\n<ul>\n<li><p>If at some point $\\delta = 0$ then $a_n = 0$ from that point onward. Likewise if $\\delta = \\frac{1}{m+2}$ then\n$$a_{n+1} = -(m+1)\\left(1-\\frac{1}{m+2}\\right) = - m - \\frac{1}{m+2} = a_n$$\nand we have $a_n = -m-\\frac{1}{m+2}$ from that point onward. </p></li>\n<li><p>If $\\delta \\in \\left(\\frac{1}{m+2},1\\right)$ then $$|a_{n+1}| = |m+1|(1-\\delta) < m + \\delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>\n<li><p>If $0 < \\delta < \\frac{1}{m+2}$ then \n$$a_{n+1} = -(m+1)(1 - \\delta) = -m - \\delta'~~~\\text{where}~~~\\delta' = 1 - (m+1)\\delta~~\\text{so}~~~\\delta' \\in \\left(\\frac{1}{m+2},1\\right)$$\nand it follows that\n$$|a_{n+2}| = (m+1)^2\\delta < m+\\delta = |a_n|~~~\\text{since}~~~\\delta < \\frac{1}{m+2}$$\nso the sequence is smaller in absolute value after two steps.</p></li>\n<li><p>Finally if $-1 \\leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\\geq n$.</p></li>\n</ul>\n\n<p>Combinding the results above we can construct a subsequence $\\{a_{n_k}\\}_{k=1}^\\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \\leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>\n\n<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate so $a_{n+2} = a_n$ holds for all $n\\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first if $a_{n_k}$ ever becomes an integer then the sequence converges to $0$ on the next step (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixpoints on the form $-m-\\frac{1}{m+2}$ for some $m\\in\\mathbb{N}$. As we will show below this is impossible as these fixpoints are repelling. </p>\n\n<p>Let $e_n = a_n - \\left(-m-\\frac{1}{m+2}\\right)$ and assume that $e_n\\to 0$ which guarantees the existence of a $N$ such that $|e_n| < \\epsilon = \\frac{1}{m+2}$ for $n\\geq N$. This choice of $\\epsilon$ guarantees that $\\lfloor a_n \\rfloor = -(m+1)$ for all $n\\geq N$ and the recursion gives us $e_{n+1} = -(m+1)e_n$ and by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\\geq 1$. Taking $k\\to \\infty$ we get a contradiction unless $e_n = 0$ which is only possible if $a_0 = -m -\\frac{1}{m+2}$ a case we already have covered.</p>\n\n<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>\n"
},
{
"answer_id": 2083877,
"score": 1,
"text": "<p>If your function is $a_{n+1} = \\lfloor a_n \\rfloor (a_n - \\lfloor a_n \\rfloor)$ (on the photo, but not the question)</p>\n\n<p>Then</p>\n\n<p>If $a_0>0$ </p>\n\n<p>$\\lfloor a_n \\rfloor < a_n$ and $ (a_n - \\lfloor a_n \\rfloor)<1$</p>\n\n<p>$0 \\le a_{n+1} < \\lfloor a_n\\rfloor] < a_n$ and $\\lfloor a_{n+1}\\rfloor \\le \\lfloor a_n \\rfloor - 1$</p>\n\n<p>The integer component is falling by at least 1 with every iteration.\neventually it must fall below 0. And which point your sequence equals 0 thereafter.</p>\n\n<p>If $a_0 < 0$ the pattern is more complicated</p>\n\n<p>it is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>\n\n<p>if $1<a_n<0$ \n$a_{n+1} = -1 - a_n$ which is in $(-1,0)$</p>\n\n<p>$a_{n+2} = a_n$</p>\n\n<p>And a stable oscillation emerges.</p>\n\n<p>$a_n < -1$ while it is possible that $|a_{n+1}| \\ge |a_n|$ but in these cases $\\lfloor a_{n+1} \\rfloor = \\lfloor a_{n} \\rfloor$ and $a_{n+1} - \\lfloor a_{n+1} \\rfloor < a_{n} - \\lfloor a_{n} \\rfloor$ increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$ </p>\n\n<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>\n"
}
]
{
"answer_id": 2084011,
"score": 3,
"text": "<p>The case $a_n \\geq 0$ (i.e. $a_0 \\geq 0$) is straightforward as $a_{n+1} = \\lfloor a_n\\rfloor(a_n-\\lfloor a_n\\rfloor) \\leq \\lfloor a_n\\rfloor \\leq a_n$ so $a_n$ is decreasing and is bounded below by $0$ so by the monotone convergence theorem the sequence converges to the only possible (positive) limit-point $a_n = 0$. By the definition of convergence there is a $N$ such that $0 < a_n < 1$ for $n\\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>\n\n<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \\delta$ where $\\delta \\in [0,1)$ and $m \\in\\mathbb{N}$. Depending on the value of $\\delta$ relative to $m$ we can have different behavior: </p>\n\n<ul>\n<li><p>If at some point $\\delta = 0$ then $a_n = 0$ from that point onward. Likewise if $\\delta = \\frac{1}{m+2}$ then\n$$a_{n+1} = -(m+1)\\left(1-\\frac{1}{m+2}\\right) = - m - \\frac{1}{m+2} = a_n$$\nand we have $a_n = -m-\\frac{1}{m+2}$ from that point onward. </p></li>\n<li><p>If $\\delta \\in \\left(\\frac{1}{m+2},1\\right)$ then $$|a_{n+1}| = |m+1|(1-\\delta) < m + \\delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>\n<li><p>If $0 < \\delta < \\frac{1}{m+2}$ then \n$$a_{n+1} = -(m+1)(1 - \\delta) = -m - \\delta'~~~\\text{where}~~~\\delta' = 1 - (m+1)\\delta~~\\text{so}~~~\\delta' \\in \\left(\\frac{1}{m+2},1\\right)$$\nand it follows that\n$$|a_{n+2}| = (m+1)^2\\delta < m+\\delta = |a_n|~~~\\text{since}~~~\\delta < \\frac{1}{m+2}$$\nso the sequence is smaller in absolute value after two steps.</p></li>\n<li><p>Finally if $-1 \\leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\\geq n$.</p></li>\n</ul>\n\n<p>Combinding the results above we can construct a subsequence $\\{a_{n_k}\\}_{k=1}^\\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \\leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>\n\n<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate so $a_{n+2} = a_n$ holds for all $n\\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first if $a_{n_k}$ ever becomes an integer then the sequence converges to $0$ on the next step (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixpoints on the form $-m-\\frac{1}{m+2}$ for some $m\\in\\mathbb{N}$. As we will show below this is impossible as these fixpoints are repelling. </p>\n\n<p>Let $e_n = a_n - \\left(-m-\\frac{1}{m+2}\\right)$ and assume that $e_n\\to 0$ which guarantees the existence of a $N$ such that $|e_n| < \\epsilon = \\frac{1}{m+2}$ for $n\\geq N$. This choice of $\\epsilon$ guarantees that $\\lfloor a_n \\rfloor = -(m+1)$ for all $n\\geq N$ and the recursion gives us $e_{n+1} = -(m+1)e_n$ and by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\\geq 1$. Taking $k\\to \\infty$ we get a contradiction unless $e_n = 0$ which is only possible if $a_0 = -m -\\frac{1}{m+2}$ a case we already have covered.</p>\n\n<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>\n"
},
{
"answer_id": 2083877,
"score": 1,
"text": "<p>If your function is $a_{n+1} = \\lfloor a_n \\rfloor (a_n - \\lfloor a_n \\rfloor)$ (on the photo, but not the question)</p>\n\n<p>Then</p>\n\n<p>If $a_0>0$ </p>\n\n<p>$\\lfloor a_n \\rfloor < a_n$ and $ (a_n - \\lfloor a_n \\rfloor)<1$</p>\n\n<p>$0 \\le a_{n+1} < \\lfloor a_n\\rfloor] < a_n$ and $\\lfloor a_{n+1}\\rfloor \\le \\lfloor a_n \\rfloor - 1$</p>\n\n<p>The integer component is falling by at least 1 with every iteration.\neventually it must fall below 0. And which point your sequence equals 0 thereafter.</p>\n\n<p>If $a_0 < 0$ the pattern is more complicated</p>\n\n<p>it is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>\n\n<p>if $1<a_n<0$ \n$a_{n+1} = -1 - a_n$ which is in $(-1,0)$</p>\n\n<p>$a_{n+2} = a_n$</p>\n\n<p>And a stable oscillation emerges.</p>\n\n<p>$a_n < -1$ while it is possible that $|a_{n+1}| \\ge |a_n|$ but in these cases $\\lfloor a_{n+1} \\rfloor = \\lfloor a_{n} \\rfloor$ and $a_{n+1} - \\lfloor a_{n+1} \\rfloor < a_{n} - \\lfloor a_{n} \\rfloor$ increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$ </p>\n\n<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>\n"
},
{
"answer_id": 2084011,
"score": 3,
"text": "<p>The case $a_n \\geq 0$ (i.e. $a_0 \\geq 0$) is straightforward as $a_{n+1} = \\lfloor a_n\\rfloor(a_n-\\lfloor a_n\\rfloor) \\leq \\lfloor a_n\\rfloor \\leq a_n$ so $a_n$ is decreasing and is bounded below by $0$ so by the monotone convergence theorem the sequence converges to the only possible (positive) limit-point $a_n = 0$. By the definition of convergence there is a $N$ such that $0 < a_n < 1$ for $n\\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>\n\n<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \\delta$ where $\\delta \\in [0,1)$ and $m \\in\\mathbb{N}$. Depending on the value of $\\delta$ relative to $m$ we can have different behavior: </p>\n\n<ul>\n<li><p>If at some point $\\delta = 0$ then $a_n = 0$ from that point onward. Likewise if $\\delta = \\frac{1}{m+2}$ then\n$$a_{n+1} = -(m+1)\\left(1-\\frac{1}{m+2}\\right) = - m - \\frac{1}{m+2} = a_n$$\nand we have $a_n = -m-\\frac{1}{m+2}$ from that point onward. </p></li>\n<li><p>If $\\delta \\in \\left(\\frac{1}{m+2},1\\right)$ then $$|a_{n+1}| = |m+1|(1-\\delta) < m + \\delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>\n<li><p>If $0 < \\delta < \\frac{1}{m+2}$ then \n$$a_{n+1} = -(m+1)(1 - \\delta) = -m - \\delta'~~~\\text{where}~~~\\delta' = 1 - (m+1)\\delta~~\\text{so}~~~\\delta' \\in \\left(\\frac{1}{m+2},1\\right)$$\nand it follows that\n$$|a_{n+2}| = (m+1)^2\\delta < m+\\delta = |a_n|~~~\\text{since}~~~\\delta < \\frac{1}{m+2}$$\nso the sequence is smaller in absolute value after two steps.</p></li>\n<li><p>Finally if $-1 \\leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\\geq n$.</p></li>\n</ul>\n\n<p>Combinding the results above we can construct a subsequence $\\{a_{n_k}\\}_{k=1}^\\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \\leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>\n\n<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate so $a_{n+2} = a_n$ holds for all $n\\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first if $a_{n_k}$ ever becomes an integer then the sequence converges to $0$ on the next step (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixpoints on the form $-m-\\frac{1}{m+2}$ for some $m\\in\\mathbb{N}$. As we will show below this is impossible as these fixpoints are repelling. </p>\n\n<p>Let $e_n = a_n - \\left(-m-\\frac{1}{m+2}\\right)$ and assume that $e_n\\to 0$ which guarantees the existence of a $N$ such that $|e_n| < \\epsilon = \\frac{1}{m+2}$ for $n\\geq N$. This choice of $\\epsilon$ guarantees that $\\lfloor a_n \\rfloor = -(m+1)$ for all $n\\geq N$ and the recursion gives us $e_{n+1} = -(m+1)e_n$ and by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\\geq 1$. Taking $k\\to \\infty$ we get a contradiction unless $e_n = 0$ which is only possible if $a_0 = -m -\\frac{1}{m+2}$ a case we already have covered.</p>\n\n<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>\n"
},
{
"answer_id": 2083877,
"score": 1,
"text": "<p>If your function is $a_{n+1} = \\lfloor a_n \\rfloor (a_n - \\lfloor a_n \\rfloor)$ (on the photo, but not the question)</p>\n\n<p>Then</p>\n\n<p>If $a_0>0$ </p>\n\n<p>$\\lfloor a_n \\rfloor < a_n$ and $ (a_n - \\lfloor a_n \\rfloor)<1$</p>\n\n<p>$0 \\le a_{n+1} < \\lfloor a_n\\rfloor] < a_n$ and $\\lfloor a_{n+1}\\rfloor \\le \\lfloor a_n \\rfloor - 1$</p>\n\n<p>The integer component is falling by at least 1 with every iteration.\neventually it must fall below 0. And which point your sequence equals 0 thereafter.</p>\n\n<p>If $a_0 < 0$ the pattern is more complicated</p>\n\n<p>it is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>\n\n<p>if $1<a_n<0$ \n$a_{n+1} = -1 - a_n$ which is in $(-1,0)$</p>\n\n<p>$a_{n+2} = a_n$</p>\n\n<p>And a stable oscillation emerges.</p>\n\n<p>$a_n < -1$ while it is possible that $|a_{n+1}| \\ge |a_n|$ but in these cases $\\lfloor a_{n+1} \\rfloor = \\lfloor a_{n} \\rfloor$ and $a_{n+1} - \\lfloor a_{n+1} \\rfloor < a_{n} - \\lfloor a_{n} \\rfloor$ increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$ </p>\n\n<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>\n"
},
{
"answer_id": 2084011,
"score": 3,
"text": "<p>The case $a_n \\geq 0$ (i.e. $a_0 \\geq 0$) is straightforward as $a_{n+1} = \\lfloor a_n\\rfloor(a_n-\\lfloor a_n\\rfloor) \\leq \\lfloor a_n\\rfloor \\leq a_n$ so $a_n$ is decreasing and is bounded below by $0$ so by the monotone convergence theorem the sequence converges to the only possible (positive) limit-point $a_n = 0$. By the definition of convergence there is a $N$ such that $0 < a_n < 1$ for $n\\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>\n\n<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \\delta$ where $\\delta \\in [0,1)$ and $m \\in\\mathbb{N}$. Depending on the value of $\\delta$ relative to $m$ we can have different behavior: </p>\n\n<ul>\n<li><p>If at some point $\\delta = 0$ then $a_n = 0$ from that point onward. Likewise if $\\delta = \\frac{1}{m+2}$ then\n$$a_{n+1} = -(m+1)\\left(1-\\frac{1}{m+2}\\right) = - m - \\frac{1}{m+2} = a_n$$\nand we have $a_n = -m-\\frac{1}{m+2}$ from that point onward. </p></li>\n<li><p>If $\\delta \\in \\left(\\frac{1}{m+2},1\\right)$ then $$|a_{n+1}| = |m+1|(1-\\delta) < m + \\delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>\n<li><p>If $0 < \\delta < \\frac{1}{m+2}$ then \n$$a_{n+1} = -(m+1)(1 - \\delta) = -m - \\delta'~~~\\text{where}~~~\\delta' = 1 - (m+1)\\delta~~\\text{so}~~~\\delta' \\in \\left(\\frac{1}{m+2},1\\right)$$\nand it follows that\n$$|a_{n+2}| = (m+1)^2\\delta < m+\\delta = |a_n|~~~\\text{since}~~~\\delta < \\frac{1}{m+2}$$\nso the sequence is smaller in absolute value after two steps.</p></li>\n<li><p>Finally if $-1 \\leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\\geq n$.</p></li>\n</ul>\n\n<p>Combinding the results above we can construct a subsequence $\\{a_{n_k}\\}_{k=1}^\\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \\leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>\n\n<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate so $a_{n+2} = a_n$ holds for all $n\\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first if $a_{n_k}$ ever becomes an integer then the sequence converges to $0$ on the next step (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixpoints on the form $-m-\\frac{1}{m+2}$ for some $m\\in\\mathbb{N}$. As we will show below this is impossible as these fixpoints are repelling. </p>\n\n<p>Let $e_n = a_n - \\left(-m-\\frac{1}{m+2}\\right)$ and assume that $e_n\\to 0$ which guarantees the existence of a $N$ such that $|e_n| < \\epsilon = \\frac{1}{m+2}$ for $n\\geq N$. This choice of $\\epsilon$ guarantees that $\\lfloor a_n \\rfloor = -(m+1)$ for all $n\\geq N$ and the recursion gives us $e_{n+1} = -(m+1)e_n$ and by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\\geq 1$. Taking $k\\to \\infty$ we get a contradiction unless $e_n = 0$ which is only possible if $a_0 = -m -\\frac{1}{m+2}$ a case we already have covered.</p>\n\n<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>\n"
},
{
"answer_id": 2083877,
"score": 1,
"text": "<p>If your function is $a_{n+1} = \\lfloor a_n \\rfloor (a_n - \\lfloor a_n \\rfloor)$ (on the photo, but not the question)</p>\n\n<p>Then</p>\n\n<p>If $a_0>0$ </p>\n\n<p>$\\lfloor a_n \\rfloor < a_n$ and $ (a_n - \\lfloor a_n \\rfloor)<1$</p>\n\n<p>$0 \\le a_{n+1} < \\lfloor a_n\\rfloor] < a_n$ and $\\lfloor a_{n+1}\\rfloor \\le \\lfloor a_n \\rfloor - 1$</p>\n\n<p>The integer component is falling by at least 1 with every iteration.\neventually it must fall below 0. And which point your sequence equals 0 thereafter.</p>\n\n<p>If $a_0 < 0$ the pattern is more complicated</p>\n\n<p>it is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>\n\n<p>if $1<a_n<0$ \n$a_{n+1} = -1 - a_n$ which is in $(-1,0)$</p>\n\n<p>$a_{n+2} = a_n$</p>\n\n<p>And a stable oscillation emerges.</p>\n\n<p>$a_n < -1$ while it is possible that $|a_{n+1}| \\ge |a_n|$ but in these cases $\\lfloor a_{n+1} \\rfloor = \\lfloor a_{n} \\rfloor$ and $a_{n+1} - \\lfloor a_{n+1} \\rfloor < a_{n} - \\lfloor a_{n} \\rfloor$ increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$ </p>\n\n<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>\n"
},
{
"answer_id": 2084011,
"score": 3,
"text": "<p>The case $a_n \\geq 0$ (i.e. $a_0 \\geq 0$) is straightforward as $a_{n+1} = \\lfloor a_n\\rfloor(a_n-\\lfloor a_n\\rfloor) \\leq \\lfloor a_n\\rfloor \\leq a_n$ so $a_n$ is decreasing and is bounded below by $0$ so by the monotone convergence theorem the sequence converges to the only possible (positive) limit-point $a_n = 0$. By the definition of convergence there is a $N$ such that $0 < a_n < 1$ for $n\\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>\n\n<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \\delta$ where $\\delta \\in [0,1)$ and $m \\in\\mathbb{N}$. Depending on the value of $\\delta$ relative to $m$ we can have different behavior: </p>\n\n<ul>\n<li><p>If at some point $\\delta = 0$ then $a_n = 0$ from that point onward. Likewise if $\\delta = \\frac{1}{m+2}$ then\n$$a_{n+1} = -(m+1)\\left(1-\\frac{1}{m+2}\\right) = - m - \\frac{1}{m+2} = a_n$$\nand we have $a_n = -m-\\frac{1}{m+2}$ from that point onward. </p></li>\n<li><p>If $\\delta \\in \\left(\\frac{1}{m+2},1\\right)$ then $$|a_{n+1}| = |m+1|(1-\\delta) < m + \\delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>\n<li><p>If $0 < \\delta < \\frac{1}{m+2}$ then \n$$a_{n+1} = -(m+1)(1 - \\delta) = -m - \\delta'~~~\\text{where}~~~\\delta' = 1 - (m+1)\\delta~~\\text{so}~~~\\delta' \\in \\left(\\frac{1}{m+2},1\\right)$$\nand it follows that\n$$|a_{n+2}| = (m+1)^2\\delta < m+\\delta = |a_n|~~~\\text{since}~~~\\delta < \\frac{1}{m+2}$$\nso the sequence is smaller in absolute value after two steps.</p></li>\n<li><p>Finally if $-1 \\leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\\geq n$.</p></li>\n</ul>\n\n<p>Combinding the results above we can construct a subsequence $\\{a_{n_k}\\}_{k=1}^\\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \\leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>\n\n<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate so $a_{n+2} = a_n$ holds for all $n\\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first if $a_{n_k}$ ever becomes an integer then the sequence converges to $0$ on the next step (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixpoints on the form $-m-\\frac{1}{m+2}$ for some $m\\in\\mathbb{N}$. As we will show below this is impossible as these fixpoints are repelling. </p>\n\n<p>Let $e_n = a_n - \\left(-m-\\frac{1}{m+2}\\right)$ and assume that $e_n\\to 0$ which guarantees the existence of a $N$ such that $|e_n| < \\epsilon = \\frac{1}{m+2}$ for $n\\geq N$. This choice of $\\epsilon$ guarantees that $\\lfloor a_n \\rfloor = -(m+1)$ for all $n\\geq N$ and the recursion gives us $e_{n+1} = -(m+1)e_n$ and by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\\geq 1$. Taking $k\\to \\infty$ we get a contradiction unless $e_n = 0$ which is only possible if $a_0 = -m -\\frac{1}{m+2}$ a case we already have covered.</p>\n\n<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>\n"
},
{
"answer_id": 2083877,
"score": 1,
"text": "<p>If your function is $a_{n+1} = \\lfloor a_n \\rfloor (a_n - \\lfloor a_n \\rfloor)$ (on the photo, but not the question)</p>\n\n<p>Then</p>\n\n<p>If $a_0>0$ </p>\n\n<p>$\\lfloor a_n \\rfloor < a_n$ and $ (a_n - \\lfloor a_n \\rfloor)<1$</p>\n\n<p>$0 \\le a_{n+1} < \\lfloor a_n\\rfloor] < a_n$ and $\\lfloor a_{n+1}\\rfloor \\le \\lfloor a_n \\rfloor - 1$</p>\n\n<p>The integer component is falling by at least 1 with every iteration.\neventually it must fall below 0. And which point your sequence equals 0 thereafter.</p>\n\n<p>If $a_0 < 0$ the pattern is more complicated</p>\n\n<p>it is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>\n\n<p>if $1<a_n<0$ \n$a_{n+1} = -1 - a_n$ which is in $(-1,0)$</p>\n\n<p>$a_{n+2} = a_n$</p>\n\n<p>And a stable oscillation emerges.</p>\n\n<p>$a_n < -1$ while it is possible that $|a_{n+1}| \\ge |a_n|$ but in these cases $\\lfloor a_{n+1} \\rfloor = \\lfloor a_{n} \\rfloor$ and $a_{n+1} - \\lfloor a_{n+1} \\rfloor < a_{n} - \\lfloor a_{n} \\rfloor$ increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$ </p>\n\n<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>\n"
},
{
"answer_id": 2084011,
"score": 3,
"text": "<p>The case $a_n \\geq 0$ (i.e. $a_0 \\geq 0$) is straightforward as $a_{n+1} = \\lfloor a_n\\rfloor(a_n-\\lfloor a_n\\rfloor) \\leq \\lfloor a_n\\rfloor \\leq a_n$ so $a_n$ is decreasing and is bounded below by $0$ so by the monotone convergence theorem the sequence converges to the only possible (positive) limit-point $a_n = 0$. By the definition of convergence there is a $N$ such that $0 < a_n < 1$ for $n\\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>\n\n<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \\delta$ where $\\delta \\in [0,1)$ and $m \\in\\mathbb{N}$. Depending on the value of $\\delta$ relative to $m$ we can have different behavior: </p>\n\n<ul>\n<li><p>If at some point $\\delta = 0$ then $a_n = 0$ from that point onward. Likewise if $\\delta = \\frac{1}{m+2}$ then\n$$a_{n+1} = -(m+1)\\left(1-\\frac{1}{m+2}\\right) = - m - \\frac{1}{m+2} = a_n$$\nand we have $a_n = -m-\\frac{1}{m+2}$ from that point onward. </p></li>\n<li><p>If $\\delta \\in \\left(\\frac{1}{m+2},1\\right)$ then $$|a_{n+1}| = |m+1|(1-\\delta) < m + \\delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>\n<li><p>If $0 < \\delta < \\frac{1}{m+2}$ then \n$$a_{n+1} = -(m+1)(1 - \\delta) = -m - \\delta'~~~\\text{where}~~~\\delta' = 1 - (m+1)\\delta~~\\text{so}~~~\\delta' \\in \\left(\\frac{1}{m+2},1\\right)$$\nand it follows that\n$$|a_{n+2}| = (m+1)^2\\delta < m+\\delta = |a_n|~~~\\text{since}~~~\\delta < \\frac{1}{m+2}$$\nso the sequence is smaller in absolute value after two steps.</p></li>\n<li><p>Finally if $-1 \\leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\\geq n$.</p></li>\n</ul>\n\n<p>Combinding the results above we can construct a subsequence $\\{a_{n_k}\\}_{k=1}^\\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \\leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>\n\n<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate so $a_{n+2} = a_n$ holds for all $n\\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first if $a_{n_k}$ ever becomes an integer then the sequence converges to $0$ on the next step (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixpoints on the form $-m-\\frac{1}{m+2}$ for some $m\\in\\mathbb{N}$. As we will show below this is impossible as these fixpoints are repelling. </p>\n\n<p>Let $e_n = a_n - \\left(-m-\\frac{1}{m+2}\\right)$ and assume that $e_n\\to 0$ which guarantees the existence of a $N$ such that $|e_n| < \\epsilon = \\frac{1}{m+2}$ for $n\\geq N$. This choice of $\\epsilon$ guarantees that $\\lfloor a_n \\rfloor = -(m+1)$ for all $n\\geq N$ and the recursion gives us $e_{n+1} = -(m+1)e_n$ and by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\\geq 1$. Taking $k\\to \\infty$ we get a contradiction unless $e_n = 0$ which is only possible if $a_0 = -m -\\frac{1}{m+2}$ a case we already have covered.</p>\n\n<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>\n"
},
{
"answer_id": 2083877,
"score": 1,
"text": "<p>If your function is $a_{n+1} = \\lfloor a_n \\rfloor (a_n - \\lfloor a_n \\rfloor)$ (on the photo, but not the question)</p>\n\n<p>Then</p>\n\n<p>If $a_0>0$ </p>\n\n<p>$\\lfloor a_n \\rfloor < a_n$ and $ (a_n - \\lfloor a_n \\rfloor)<1$</p>\n\n<p>$0 \\le a_{n+1} < \\lfloor a_n\\rfloor] < a_n$ and $\\lfloor a_{n+1}\\rfloor \\le \\lfloor a_n \\rfloor - 1$</p>\n\n<p>The integer component is falling by at least 1 with every iteration.\neventually it must fall below 0. And which point your sequence equals 0 thereafter.</p>\n\n<p>If $a_0 < 0$ the pattern is more complicated</p>\n\n<p>it is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>\n\n<p>if $1<a_n<0$ \n$a_{n+1} = -1 - a_n$ which is in $(-1,0)$</p>\n\n<p>$a_{n+2} = a_n$</p>\n\n<p>And a stable oscillation emerges.</p>\n\n<p>$a_n < -1$ while it is possible that $|a_{n+1}| \\ge |a_n|$ but in these cases $\\lfloor a_{n+1} \\rfloor = \\lfloor a_{n} \\rfloor$ and $a_{n+1} - \\lfloor a_{n+1} \\rfloor < a_{n} - \\lfloor a_{n} \\rfloor$ increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$ </p>\n\n<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>\n"
},
{
"answer_id": 2084011,
"score": 3,
"text": "<p>The case $a_n \\geq 0$ (i.e. $a_0 \\geq 0$) is straightforward as $a_{n+1} = \\lfloor a_n\\rfloor(a_n-\\lfloor a_n\\rfloor) \\leq \\lfloor a_n\\rfloor \\leq a_n$ so $a_n$ is decreasing and is bounded below by $0$ so by the monotone convergence theorem the sequence converges to the only possible (positive) limit-point $a_n = 0$. By the definition of convergence there is a $N$ such that $0 < a_n < 1$ for $n\\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>\n\n<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \\delta$ where $\\delta \\in [0,1)$ and $m \\in\\mathbb{N}$. Depending on the value of $\\delta$ relative to $m$ we can have different behavior: </p>\n\n<ul>\n<li><p>If at some point $\\delta = 0$ then $a_n = 0$ from that point onward. Likewise if $\\delta = \\frac{1}{m+2}$ then\n$$a_{n+1} = -(m+1)\\left(1-\\frac{1}{m+2}\\right) = - m - \\frac{1}{m+2} = a_n$$\nand we have $a_n = -m-\\frac{1}{m+2}$ from that point onward. </p></li>\n<li><p>If $\\delta \\in \\left(\\frac{1}{m+2},1\\right)$ then $$|a_{n+1}| = |m+1|(1-\\delta) < m + \\delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>\n<li><p>If $0 < \\delta < \\frac{1}{m+2}$ then \n$$a_{n+1} = -(m+1)(1 - \\delta) = -m - \\delta'~~~\\text{where}~~~\\delta' = 1 - (m+1)\\delta~~\\text{so}~~~\\delta' \\in \\left(\\frac{1}{m+2},1\\right)$$\nand it follows that\n$$|a_{n+2}| = (m+1)^2\\delta < m+\\delta = |a_n|~~~\\text{since}~~~\\delta < \\frac{1}{m+2}$$\nso the sequence is smaller in absolute value after two steps.</p></li>\n<li><p>Finally if $-1 \\leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\\geq n$.</p></li>\n</ul>\n\n<p>Combinding the results above we can construct a subsequence $\\{a_{n_k}\\}_{k=1}^\\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \\leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>\n\n<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate so $a_{n+2} = a_n$ holds for all $n\\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first if $a_{n_k}$ ever becomes an integer then the sequence converges to $0$ on the next step (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixpoints on the form $-m-\\frac{1}{m+2}$ for some $m\\in\\mathbb{N}$. As we will show below this is impossible as these fixpoints are repelling. </p>\n\n<p>Let $e_n = a_n - \\left(-m-\\frac{1}{m+2}\\right)$ and assume that $e_n\\to 0$ which guarantees the existence of a $N$ such that $|e_n| < \\epsilon = \\frac{1}{m+2}$ for $n\\geq N$. This choice of $\\epsilon$ guarantees that $\\lfloor a_n \\rfloor = -(m+1)$ for all $n\\geq N$ and the recursion gives us $e_{n+1} = -(m+1)e_n$ and by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\\geq 1$. Taking $k\\to \\infty$ we get a contradiction unless $e_n = 0$ which is only possible if $a_0 = -m -\\frac{1}{m+2}$ a case we already have covered.</p>\n\n<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>\n"
},
{
"answer_id": 2083877,
"score": 1,
"text": "<p>If your function is $a_{n+1} = \\lfloor a_n \\rfloor (a_n - \\lfloor a_n \\rfloor)$ (on the photo, but not the question)</p>\n\n<p>Then</p>\n\n<p>If $a_0>0$ </p>\n\n<p>$\\lfloor a_n \\rfloor < a_n$ and $ (a_n - \\lfloor a_n \\rfloor)<1$</p>\n\n<p>$0 \\le a_{n+1} < \\lfloor a_n\\rfloor] < a_n$ and $\\lfloor a_{n+1}\\rfloor \\le \\lfloor a_n \\rfloor - 1$</p>\n\n<p>The integer component is falling by at least 1 with every iteration.\neventually it must fall below 0. And which point your sequence equals 0 thereafter.</p>\n\n<p>If $a_0 < 0$ the pattern is more complicated</p>\n\n<p>it is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>\n\n<p>if $1<a_n<0$ \n$a_{n+1} = -1 - a_n$ which is in $(-1,0)$</p>\n\n<p>$a_{n+2} = a_n$</p>\n\n<p>And a stable oscillation emerges.</p>\n\n<p>$a_n < -1$ while it is possible that $|a_{n+1}| \\ge |a_n|$ but in these cases $\\lfloor a_{n+1} \\rfloor = \\lfloor a_{n} \\rfloor$ and $a_{n+1} - \\lfloor a_{n+1} \\rfloor < a_{n} - \\lfloor a_{n} \\rfloor$ increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$ </p>\n\n<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>\n"
},
{
"answer_id": 2084011,
"score": 3,
"text": "<p>The case $a_n \\geq 0$ (i.e. $a_0 \\geq 0$) is straightforward as $a_{n+1} = \\lfloor a_n\\rfloor(a_n-\\lfloor a_n\\rfloor) \\leq \\lfloor a_n\\rfloor \\leq a_n$ so $a_n$ is decreasing and is bounded below by $0$ so by the monotone convergence theorem the sequence converges to the only possible (positive) limit-point $a_n = 0$. By the definition of convergence there is a $N$ such that $0 < a_n < 1$ for $n\\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>\n\n<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \\delta$ where $\\delta \\in [0,1)$ and $m \\in\\mathbb{N}$. Depending on the value of $\\delta$ relative to $m$ we can have different behavior: </p>\n\n<ul>\n<li><p>If at some point $\\delta = 0$ then $a_n = 0$ from that point onward. Likewise if $\\delta = \\frac{1}{m+2}$ then\n$$a_{n+1} = -(m+1)\\left(1-\\frac{1}{m+2}\\right) = - m - \\frac{1}{m+2} = a_n$$\nand we have $a_n = -m-\\frac{1}{m+2}$ from that point onward. </p></li>\n<li><p>If $\\delta \\in \\left(\\frac{1}{m+2},1\\right)$ then $$|a_{n+1}| = |m+1|(1-\\delta) < m + \\delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>\n<li><p>If $0 < \\delta < \\frac{1}{m+2}$ then \n$$a_{n+1} = -(m+1)(1 - \\delta) = -m - \\delta'~~~\\text{where}~~~\\delta' = 1 - (m+1)\\delta~~\\text{so}~~~\\delta' \\in \\left(\\frac{1}{m+2},1\\right)$$\nand it follows that\n$$|a_{n+2}| = (m+1)^2\\delta < m+\\delta = |a_n|~~~\\text{since}~~~\\delta < \\frac{1}{m+2}$$\nso the sequence is smaller in absolute value after two steps.</p></li>\n<li><p>Finally if $-1 \\leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\\geq n$.</p></li>\n</ul>\n\n<p>Combinding the results above we can construct a subsequence $\\{a_{n_k}\\}_{k=1}^\\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \\leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>\n\n<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate so $a_{n+2} = a_n$ holds for all $n\\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first if $a_{n_k}$ ever becomes an integer then the sequence converges to $0$ on the next step (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixpoints on the form $-m-\\frac{1}{m+2}$ for some $m\\in\\mathbb{N}$. As we will show below this is impossible as these fixpoints are repelling. </p>\n\n<p>Let $e_n = a_n - \\left(-m-\\frac{1}{m+2}\\right)$ and assume that $e_n\\to 0$ which guarantees the existence of a $N$ such that $|e_n| < \\epsilon = \\frac{1}{m+2}$ for $n\\geq N$. This choice of $\\epsilon$ guarantees that $\\lfloor a_n \\rfloor = -(m+1)$ for all $n\\geq N$ and the recursion gives us $e_{n+1} = -(m+1)e_n$ and by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\\geq 1$. Taking $k\\to \\infty$ we get a contradiction unless $e_n = 0$ which is only possible if $a_0 = -m -\\frac{1}{m+2}$ a case we already have covered.</p>\n\n<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>\n"
},
{
"answer_id": 2083877,
"score": 1,
"text": "<p>If your function is $a_{n+1} = \\lfloor a_n \\rfloor (a_n - \\lfloor a_n \\rfloor)$ (on the photo, but not the question)</p>\n\n<p>Then</p>\n\n<p>If $a_0>0$ </p>\n\n<p>$\\lfloor a_n \\rfloor < a_n$ and $ (a_n - \\lfloor a_n \\rfloor)<1$</p>\n\n<p>$0 \\le a_{n+1} < \\lfloor a_n\\rfloor] < a_n$ and $\\lfloor a_{n+1}\\rfloor \\le \\lfloor a_n \\rfloor - 1$</p>\n\n<p>The integer component is falling by at least 1 with every iteration.\neventually it must fall below 0. And which point your sequence equals 0 thereafter.</p>\n\n<p>If $a_0 < 0$ the pattern is more complicated</p>\n\n<p>it is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>\n\n<p>if $1<a_n<0$ \n$a_{n+1} = -1 - a_n$ which is in $(-1,0)$</p>\n\n<p>$a_{n+2} = a_n$</p>\n\n<p>And a stable oscillation emerges.</p>\n\n<p>$a_n < -1$ while it is possible that $|a_{n+1}| \\ge |a_n|$ but in these cases $\\lfloor a_{n+1} \\rfloor = \\lfloor a_{n} \\rfloor$ and $a_{n+1} - \\lfloor a_{n+1} \\rfloor < a_{n} - \\lfloor a_{n} \\rfloor$ increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$ </p>\n\n<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>\n"
}
] | Question: Show given property for a sequence $a_{n}$
Answer 1: <p>The case $a_n \geq 0$ (i.e. $a_0 \geq 0$) is straightforward: $a_{n+1} = \lfloor a_n\rfloor(a_n-\lfloor a_n\rfloor) \leq \lfloor a_n\rfloor \leq a_n$, so $a_n$ is decreasing and bounded below by $0$, and by the monotone convergence theorem it converges to the only possible nonnegative limit point, $0$ (if $a_n \downarrow L = m + \delta > 0$ with $m = \lfloor L\rfloor$, then eventually $\lfloor a_n\rfloor = m$, so $L = \lim a_{n+1} = m\delta$, and $m\delta = m+\delta$ has no solution with $m\geq 0$ and $0\leq\delta<1$). By the definition of convergence there is an $N$ such that $0 \leq a_n < 1$ for $n\geq N$. But then $a_{N+1} = 0$ and $a_n = 0$ for all $n > N$.</p>
<p>The tricky case is $a_0 < 0$. First note that we can write $a_n = -m - \delta$ where $\delta \in [0,1)$ and $m \in\mathbb{N}$. Depending on the value of $\delta$ relative to $m$ we can have different behavior: </p>
<ul>
<li><p>If at some point $\delta = 0$ then $a_{n+1} = 0$ and the sequence is $0$ from that point onward. Likewise if $\delta = \frac{1}{m+2}$ then
$$a_{n+1} = -(m+1)\left(1-\frac{1}{m+2}\right) = - m - \frac{1}{m+2} = a_n$$
and we have $a_n = -m-\frac{1}{m+2}$ from that point onward. </p></li>
<li><p>If $\delta \in \left(\frac{1}{m+2},1\right)$ then $$|a_{n+1}| = (m+1)(1-\delta) < m + \delta = |a_n|$$ so the sequence is smaller in absolute value at the next step.</p></li>
<li><p>If $0 < \delta < \frac{1}{m+2}$ then
$$a_{n+1} = -(m+1)(1 - \delta) = -m - \delta'~~~\text{where}~~~\delta' = 1 - (m+1)\delta~~\text{so}~~~\delta' \in \left(\frac{1}{m+2},1\right)$$
and it follows that
$$|a_{n+2}| = (m+1)^2\delta < m+\delta = |a_n|~~~\text{since}~~~\delta < \frac{1}{m+2}$$
so the sequence is smaller in absolute value after two steps.</p></li>
<li><p>Finally if $-1 \leq a_n < 0$ then $a_{n+1} = -1 - a_n$ and $a_{k+2} = a_k$ holds for all $k\geq n$.</p></li>
</ul>
<p>Combining the results above, we can construct a subsequence $\{a_{n_k}\}_{k=1}^\infty$ that is decreasing in absolute value. Take $n_1 = 0$ and for $k\geq 1$ we take $n_{k+1} = n_k + 1$ if $|a_{n_k+1}| \leq |a_{n_k}|$ and $n_{k+1} = n_k + 2$ otherwise. </p>
<p>If this subsequence at some point has $-1 < a_{n_k} < 0$ then it will start to alternate, so $a_{n+2} = a_n$ holds for all $n\geq n_k$. If this does not happen then by the monotone convergence theorem this subsequence must converge. There are two options: first, if $a_{n_k}$ ever becomes an integer then the sequence equals $0$ from the next step on (and it follows that $a_n = 0$ for all $n > n_k$). Otherwise the subsequence must converge to one of the fixed points of the form $-m-\frac{1}{m+2}$ for some $m\in\mathbb{N}$. As we show below, this is impossible because these fixed points are repelling. </p>
<p>Let $e_n = a_n - \left(-m-\frac{1}{m+2}\right)$ and assume that $e_n\to 0$, which guarantees the existence of an $N$ such that $|e_n| < \epsilon = \frac{1}{m+2}$ for $n\geq N$. This choice of $\epsilon$ guarantees that $\lfloor a_n \rfloor = -(m+1)$ for all $n\geq N$, and the recursion gives us $e_{n+1} = -(m+1)e_n$, so by induction $e_{n+k} = (-1)^k (m+1)^k e_n$ for all $k\geq 1$. Taking $k\to \infty$ we get a contradiction unless $e_n = 0$, i.e. unless $a_n = -m -\frac{1}{m+2}$ exactly, a case we have already covered.</p>
<p>In conclusion we see that $a_{n+2} = a_n$ holds for all sufficiently large $n$.</p>
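A minimal numerical sanity check of the argument above (a sketch, not part of the original answer): it iterates the recursion, confirms that $-m-\frac{1}{m+2}$ are fixed points, and shows a generic negative start settling into $a_{n+2}=a_n$. The starting value $-7.3$ and the iteration counts are arbitrary choices.

```python
import math

def step(a):
    # One step of the recursion a_{n+1} = floor(a_n) * (a_n - floor(a_n)).
    f = math.floor(a)
    return f * (a - f)

# The fixed points -m - 1/(m+2) map to themselves (up to float rounding).
for m in range(4):
    a = -m - 1 / (m + 2)
    assert abs(step(a) - a) < 1e-12

# A generic negative start settles into a 2-cycle after finitely many steps.
a = -7.3  # arbitrary starting value for illustration
for _ in range(50):
    a = step(a)
print(a, step(step(a)))  # the two values agree (up to float error): a_{n+2} = a_n
```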
Answer 2: <p>If your function is $a_{n+1} = \lfloor a_n \rfloor (a_n - \lfloor a_n \rfloor)$ (on the photo, but not the question)</p>
<p>Then</p>
<p>If $a_0>0$ </p>
<p>$\lfloor a_n \rfloor \le a_n$ and $ (a_n - \lfloor a_n \rfloor)<1$, so</p>
<p>$0 \le a_{n+1} \le \lfloor a_n\rfloor \le a_n$ and, when $\lfloor a_n\rfloor \ge 1$, $\lfloor a_{n+1}\rfloor \le \lfloor a_n \rfloor - 1$</p>
<p>The integer part falls by at least 1 with every iteration, so
eventually it reaches 0, at which point the sequence equals 0 thereafter.</p>
<p>If $a_0 < 0$ the pattern is more complicated.</p>
<p>It is possible for $a_{n+1}$ to jump to an integer and then to $0.$</p>
<p>If $-1<a_n<0$ then
$a_{n+1} = -1 - a_n$, which is also in $(-1,0)$, and</p>
<p>$a_{n+2} = a_n$</p>
<p>And a stable oscillation emerges.</p>
<p>For $a_n < -1$, while it is possible that $|a_{n+1}| \ge |a_n|$, in these cases $\lfloor a_{n+1} \rfloor = \lfloor a_{n} \rfloor$ and $a_{n+1} - \lfloor a_{n+1} \rfloor < a_{n} - \lfloor a_{n} \rfloor$, increasing the likelihood that $|a_{n+2}|<|a_{n+1}| - 1$. </p>
<p>The sequence either heads to $0$, or it falls into a regular oscillation.</p>
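As an empirical illustration of the dichotomy both answers arrive at (every orbit either reaches $0$ or ends in a 2-cycle; the repelling fixed points $-m-\frac{1}{m+2}$ are a measure-zero exception that a random start will not hit), here is a small random-start experiment. This is a hedged sketch: the seed, range, and iteration count are arbitrary choices.

```python
import math
import random

def step(a):
    # a_{n+1} = floor(a_n) * (a_n - floor(a_n))
    f = math.floor(a)
    return f * (a - f)

random.seed(0)  # arbitrary seed, for reproducibility
for _ in range(5):
    a = random.uniform(-20.0, 20.0)  # arbitrary range of starting values
    for _ in range(200):             # 200 steps is ample for this range
        a = step(a)
    # Numerically the orbit is now fixed at 0 or 2-periodic, so the
    # two printed values agree (up to float rounding):
    print(round(a, 9), round(step(step(a)), 9))
```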
|
Polar transformation of a probability distribution function | 2,083,888 | 5 | https://math.stackexchange.com/questions/2083888/polar-transformation-of-a-probability-distribution-function | <p>I am working through Dirk P Kroese "Monte Carlo Methods" notes with one section based on Random Variable Generation from uniform random numbers using polar transformations (section 2.1.2.6). </p>
<p>The polar method is based on the polar coordinate transformation $ X=R \cos \Theta$, $Y=R\sin \Theta$, where $\Theta \sim \text{U}(0,2\pi)$ and $R\sim f_R$ are independent. Using standard transformation rules it follows that the joint pdf of $X$ and $Y$ satisfies:</p>
<p>$$f_{X,Y}(x,y)=\cfrac{f_R(r)}{2\pi r}$$</p>
<p>with $r=\sqrt{x^2+y^2}$.</p>
<p>I don't fully understand how the expression for $f_{X,Y}(x,y)$ is obtained from the "standard transformation rules". Please help, or give a hint.</p>
| [
{
"answer_id": 2083918,
"score": 3,
"text": "<p>Rewrite the RHS as \\begin{align}\\frac{f_R(r)}{2\\pi r}&=f_R(r)\\cdot\\frac1{2\\pi}\\cdot\\frac1r=f_R(r)f_\\Theta(\\theta)\\frac1r=f_{R,\\Theta}(r,\\theta)\\frac1r\\end{align} (where we used that $\\Theta \\sim \\text{U}(0,2\\pi)$ hence $f_\\Theta(\\theta)=\\frac1{2\\pi}$ and that $R,\\Theta$ are independent) so that their assertion becomes $$f_{R,\\Theta}(r,\\theta)=f_{X,Y}(x,y)\\cdot r$$ where $x$ stands actually for $x(r,\\theta)=r\\cos{(\\theta)}$ and similarly $y$ stands for $y(r,\\theta)=r\\sin{(\\theta)}$. Hence, we expect that $r$ is the absolute value of the determinant of the Jacobian of the transformation (this is the standard methodology for transforming bivariate random variables, just search for this in some textbook or on the internet), which we can indeed verify $$\\det{(J)}=\\frac{\\partial x(r,\\theta)}{\\partial r}\\frac{\\partial y(r,\\theta)}{\\partial \\theta}-\\frac{\\partial x(r,\\theta)}{\\partial \\theta}\\frac{\\partial y(r,\\theta)}{\\partial r}=r\\cos^2{(\\theta)}+r\\sin^2{(\\theta)}=r\\cdot1=r$$ <strong>Note:</strong> I would expect $|r|$ but since they omitted the absolute value, there must be some point where they assume that $r>0$ (if I am not mistaken).</p>\n"
},
{
"answer_id": 2083926,
"score": 3,
"text": "<p>Note that $P_{X,Y}(X=x \\land Y=y)=P_{R,\\theta}(R=\\sqrt{x^2+y^2} \\land \\tan\\theta=\\frac{y}{x})$. Now since $R$ and $\\theta$ are independent we have <br></p>\n\n<p>$$f_{X,Y}(x,y)dxdy=f_R(R=r).\\frac{1}{2\\pi} drd\\theta$$\nNow since $dxdy=rdrd\\theta$, we have the desired result.</p>\n"
}
] | Question: Polar transformation of a probability distribution function
Answer 1: <p>Rewrite the RHS as \begin{align}\frac{f_R(r)}{2\pi r}&=f_R(r)\cdot\frac1{2\pi}\cdot\frac1r=f_R(r)f_\Theta(\theta)\frac1r=f_{R,\Theta}(r,\theta)\frac1r\end{align} (where we used that $\Theta \sim \text{U}(0,2\pi)$, hence $f_\Theta(\theta)=\frac1{2\pi}$, and that $R,\Theta$ are independent) so that their assertion becomes $$f_{R,\Theta}(r,\theta)=f_{X,Y}(x,y)\cdot r$$ where $x$ actually stands for $x(r,\theta)=r\cos{(\theta)}$ and similarly $y$ stands for $y(r,\theta)=r\sin{(\theta)}$. Hence, we expect that $r$ is the absolute value of the determinant of the Jacobian of the transformation (this is the standard methodology for transforming bivariate random variables; see any textbook treatment of bivariate transformations), which we can indeed verify: $$\det{(J)}=\frac{\partial x(r,\theta)}{\partial r}\frac{\partial y(r,\theta)}{\partial \theta}-\frac{\partial x(r,\theta)}{\partial \theta}\frac{\partial y(r,\theta)}{\partial r}=r\cos^2{(\theta)}+r\sin^2{(\theta)}=r\cdot1=r$$ <strong>Note:</strong> I would expect $|r|$, but since they omitted the absolute value, there must be some point where they assume that $r>0$ (if I am not mistaken).</p>
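Editorial note (an addition, not part of the original answer): the closing hedge about $|r|$ can be settled directly. Since $R=\sqrt{X^2+Y^2}\ge 0$, the density $f_R$ is supported on $r\ge 0$, so $|r|=r$ on the support and the absolute value may be dropped:
$$f_{X,Y}(x,y)=\frac{f_{R,\Theta}(r,\theta)}{|\det(J)|}=\frac{f_{R,\Theta}(r,\theta)}{r}=\frac{f_R(r)}{2\pi r}\qquad\text{for } r=\sqrt{x^2+y^2}>0.$$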
Answer 2: <p>Note that $P_{X,Y}(X=x \land Y=y)=P_{R,\theta}(R=\sqrt{x^2+y^2} \land \tan\theta=\frac{y}{x})$. Now, since $R$ and $\theta$ are independent, we have <br></p>
<p>$$f_{X,Y}(x,y)\,dx\,dy=f_R(r)\cdot\frac{1}{2\pi}\,dr\,d\theta$$
Now, since $dx\,dy=r\,dr\,d\theta$, we have the desired result.</p>
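Editorial note (an addition, not part of the original answer): the area-element identity $dx\,dy=r\,dr\,d\theta$ used above is exactly the Jacobian of the polar map $x=r\cos\theta$, $y=r\sin\theta$:
$$dx\,dy=\left|\det\begin{pmatrix}\partial x/\partial r & \partial x/\partial \theta\\ \partial y/\partial r & \partial y/\partial \theta\end{pmatrix}\right|dr\,d\theta=\left|\det\begin{pmatrix}\cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta\end{pmatrix}\right|dr\,d\theta=r\,dr\,d\theta\qquad(r\ge 0).$$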
|
Doing definite integration $\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}$ | 2084474 | 5 | https://math.stackexchange.com/questions/2084474/doing-definite-integration-int-0-pi-4-fracx-dx-cos-x-cos-x-sin-x | <p>We have to solve the following integral
$$
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}
$$
I divided both the numerator and the denominator by $\cos^2 x$.</p>
<p>But after that I am stuck.</p>
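Editorial note (an addition, not part of the original question): dividing the numerator and denominator by $\cos^2 x$, as the asker started to do, rewrites the integral as
$$\int_0^{\pi/4}\frac{x\sec^2 x}{1+\tan x}\,dx,$$
which is the form that several of the answers below exploit (via $u=\tan x$ or integration by parts).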
| [
{
"answer_id": 2084511,
"score": 5,
"text": "<p>We have <span class=\"math-container\">$$I=\\int_{0}^{\\pi/4}\\frac{2x}{2\\cos^2x+2\\cos x\\sin x}dx=\\int_0^{\\pi/4}\\frac{2x}{\\cos 2x+\\sin 2x+1}dx=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{xdx}{\\cos x+\\sin x+1}=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{\\pi/2-x}{\\cos x+\\sin x+1}dx\\\\I=\\frac{1}{4}\\int_0^{\\pi/2}\\frac{\\pi/2}{\\cos x+\\sin x+1}dx$$</span>Now substitute <span class=\"math-container\">$x$</span> for <span class=\"math-container\">$2x$</span> again<span class=\"math-container\">$$=\\frac{\\pi}{4}\\int_0^{\\pi/4}\\frac{dx}{2\\cos^2x+2\\sin x\\cos x}=\\frac{\\pi}{8}\\int_0^{\\pi/4}\\frac{\\sec^2xdx}{1+\\tan x}=\\frac{\\pi}{8}\\int_0^{1}\\frac{1}{1+u}du=\\frac{\\pi}{8}\\log 2$$</span></p>\n"
},
{
"answer_id": 2084488,
"score": 4,
"text": "<p>Let $$I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\cos(x)+\\sin(x))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\sqrt2\\sin(x+\\frac{\\pi}{4}))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$ </p>\n\n<p>Since $$ I(x)=I(\\frac{\\pi}{4}-x)$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x)=\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\cos(\\frac{\\pi}{4}-x)(\\sqrt2\\sin(\\frac{\\pi}{4}-x+\\frac{\\pi}{4}))} dx$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x) =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>But $$ I(x)=I(\\frac{\\pi}{4}-x)$$ </p>\n\n<p>so $$ I+I =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})}+\\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>$$ 2I = \\frac{\\pi}{4\\sqrt2}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>Can you continue?</p>\n\n<p>$$ I = \\frac{\\sqrt2\\pi}{16}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)(\\frac{1}{\\sqrt{2}})(\\cos(x)+\\sin(x))} dx$$</p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec(x)}{\\sin(x)+\\cos(x)}\\cdot\\frac{\\frac{1}{\\cos(x)}}{\\frac{1}{\\cos(x)}} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec^2(x)}{\\tan(x)+1} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8} \\left[\\ln|\\tan(x)+1 \\right]^{\\frac{\\pi}{4}}_0$$</p>\n\n<p>$$ I = \\frac{\\pi}{8} \\cdot \\ln(2) $$</p>\n"
},
{
"answer_id": 2084502,
"score": 3,
"text": "<p>\\begin{align}\n\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)} &= \\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos^2(x)+\\cos(x)\\sin(x)}\\\\ &=\\int_0^{\\pi/4}\\frac{2x\\,dx}{\\cos(2x)+\\sin(2x)+1} \\\\&=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{t\\,dt}{\\cos(t)+\\sin(t)+1} \n\\end{align}</p>\n\n<p>By $t \\to 1-t$\n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2-t)\\,dt}{\\cos(t)+\\sin(t)+1} $$</p>\n\n<p>Separate the integrals \n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2)\\,dt}{\\cos(t)+\\sin(t)+1} -I$$</p>\n\n<p>$$I = \\frac{\\pi}{8}\\int_0^{\\pi/2}\\frac{\\,dt}{\\cos(t)+\\sin(t)+1} = \\frac{\\pi}{8}\\log(2)$$</p>\n\n<p>The last integral could be solved using <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>.</p>\n"
},
{
"answer_id": 2084520,
"score": 1,
"text": "<p>Note that $$I=\\int_{0}^{\\pi/4}\\frac{x}{\\cos\\left(x\\right)\\left(\\cos\\left(x\\right)+\\sin\\left(x\\right)\\right)}dx=\\int_{0}^{\\pi/4}\\frac{x}{\\cos^{2}\\left(x\\right)\\left(1+\\tan\\left(x\\right)\\right)}dx\n $$ $$\\stackrel{\\tan\\left(x\\right)=u}{=}\\int_{0}^{1}\\frac{\\arctan\\left(u\\right)}{1+u}du\\stackrel{IBP}{=}\\frac{\\pi}{4}\\log\\left(2\\right)-\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\n $$ now note that $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\\stackrel{u=\\frac{1-y}{1+y}}{=}\\int_{0}^{1}\\frac{\\log\\left(2\\right)-\\log\\left(1+y\\right)}{1+y^{2}}dy\n $$ hence $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du=\\frac{\\log\\left(2\\right)}{2}\\int_{0}^{1}\\frac{dy}{1+y^{2}}=\\frac{\\pi}{8}\\log\\left(2\\right)\n $$ so $$I=\\color{red}{\\frac{\\pi}{8}\\log\\left(2\\right)}.$$ </p>\n"
},
{
"answer_id": 2084549,
"score": 1,
"text": "<p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\\frac{\\pi}{4}$ and then the change of variables $u=-w$ to get:</p>\n\n<p>$I=\\displaystyle{\\int_{-\\frac{\\pi}{4}}^{0}\\frac{u}{\\cos u(\\cos u -\\sin u)}du+\\frac{\\pi}{4}\\int_{-\\frac{\\pi}{4}}^{0}\\frac{1}{\\cos u(\\cos u -\\sin u)}du}=\\\\ -\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{w}{\\cos w(\\cos w +\\sin w)}dw+\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>and therefore we have:</p>\n\n<p>$I=-I +\\displaystyle{\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw\\Rightarrow I=\\frac{\\pi}{8}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>But with the substitution $y=tanw$ we obtain:</p>\n\n<p>$\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw=\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos^2 w}\\frac{1}{(1 +\\tan w)}dw=\\int_{0}^{1}\\frac{1}{1+y}dy=\\ln2}$</p>\n\n<p>and finally $I=\\frac{\\pi}{8}\\ln2$.</p>\n\n<p><em>Note:</em> In the change of variables $x=u+\\frac{\\pi}{4}$ we used the trigonometric identities:</p>\n\n<p>$\\cos (x+y)=\\cos x \\cos y - \\sin x \\sin y \\\\ \\sin (x+y)=\\sin x \\cos y + \\cos x \\sin y$</p>\n"
},
{
"answer_id": 3790850,
"score": 0,
"text": "<p><span class=\"math-container\">\\begin{align}J&=\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)}\\\\&\\overset{\\text{IBP}}=\\Big[\\ln\\left(1+\\tan x\\right)x\\Big]_0^{\\pi/4}-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx&\\overset{u=\\frac{\\pi}{4}-x}=\\int_0^{\\pi/4}\\ln\\left(1+\\frac{1-\\tan u}{1+\\tan u}\\right)du\\\\\n&=\\int_0^{\\pi/4}\\ln\\left(\\frac{2}{1+\\tan u}\\right)du\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du\\\\\n\\end{align}</span>\nTherefore,\n<span class=\"math-container\">\\begin{align}\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du&=\\frac{1}{8}\\pi\\ln 2\\\\\nJ&=\\boxed{\\frac{1}{8}\\pi\\ln 2}\n\\end{align}</span></p>\n<p>NB:\nIMHO when you have <span class=\"math-container\">$\\displaystyle \\int_a^b f(x)x dx$</span> check out if <span class=\"math-container\">$f$</span> have a simple antiderivative.\nOf course, check out first if <span class=\"math-container\">$f(a+b-x)=f(x)$</span></p>\n"
},
{
"answer_id": 2084511,
"score": 5,
"text": "<p>We have <span class=\"math-container\">$$I=\\int_{0}^{\\pi/4}\\frac{2x}{2\\cos^2x+2\\cos x\\sin x}dx=\\int_0^{\\pi/4}\\frac{2x}{\\cos 2x+\\sin 2x+1}dx=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{xdx}{\\cos x+\\sin x+1}=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{\\pi/2-x}{\\cos x+\\sin x+1}dx\\\\I=\\frac{1}{4}\\int_0^{\\pi/2}\\frac{\\pi/2}{\\cos x+\\sin x+1}dx$$</span>Now substitute <span class=\"math-container\">$x$</span> for <span class=\"math-container\">$2x$</span> again<span class=\"math-container\">$$=\\frac{\\pi}{4}\\int_0^{\\pi/4}\\frac{dx}{2\\cos^2x+2\\sin x\\cos x}=\\frac{\\pi}{8}\\int_0^{\\pi/4}\\frac{\\sec^2xdx}{1+\\tan x}=\\frac{\\pi}{8}\\int_0^{1}\\frac{1}{1+u}du=\\frac{\\pi}{8}\\log 2$$</span></p>\n"
},
{
"answer_id": 2084488,
"score": 4,
"text": "<p>Let $$I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\cos(x)+\\sin(x))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\sqrt2\\sin(x+\\frac{\\pi}{4}))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$ </p>\n\n<p>Since $$ I(x)=I(\\frac{\\pi}{4}-x)$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x)=\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\cos(\\frac{\\pi}{4}-x)(\\sqrt2\\sin(\\frac{\\pi}{4}-x+\\frac{\\pi}{4}))} dx$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x) =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>But $$ I(x)=I(\\frac{\\pi}{4}-x)$$ </p>\n\n<p>so $$ I+I =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})}+\\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>$$ 2I = \\frac{\\pi}{4\\sqrt2}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>Can you continue?</p>\n\n<p>$$ I = \\frac{\\sqrt2\\pi}{16}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)(\\frac{1}{\\sqrt{2}})(\\cos(x)+\\sin(x))} dx$$</p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec(x)}{\\sin(x)+\\cos(x)}\\cdot\\frac{\\frac{1}{\\cos(x)}}{\\frac{1}{\\cos(x)}} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec^2(x)}{\\tan(x)+1} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8} \\left[\\ln|\\tan(x)+1 \\right]^{\\frac{\\pi}{4}}_0$$</p>\n\n<p>$$ I = \\frac{\\pi}{8} \\cdot \\ln(2) $$</p>\n"
},
{
"answer_id": 2084502,
"score": 3,
"text": "<p>\\begin{align}\n\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)} &= \\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos^2(x)+\\cos(x)\\sin(x)}\\\\ &=\\int_0^{\\pi/4}\\frac{2x\\,dx}{\\cos(2x)+\\sin(2x)+1} \\\\&=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{t\\,dt}{\\cos(t)+\\sin(t)+1} \n\\end{align}</p>\n\n<p>By $t \\to 1-t$\n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2-t)\\,dt}{\\cos(t)+\\sin(t)+1} $$</p>\n\n<p>Separate the integrals \n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2)\\,dt}{\\cos(t)+\\sin(t)+1} -I$$</p>\n\n<p>$$I = \\frac{\\pi}{8}\\int_0^{\\pi/2}\\frac{\\,dt}{\\cos(t)+\\sin(t)+1} = \\frac{\\pi}{8}\\log(2)$$</p>\n\n<p>The last integral could be solved using <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>.</p>\n"
},
{
"answer_id": 2084520,
"score": 1,
"text": "<p>Note that $$I=\\int_{0}^{\\pi/4}\\frac{x}{\\cos\\left(x\\right)\\left(\\cos\\left(x\\right)+\\sin\\left(x\\right)\\right)}dx=\\int_{0}^{\\pi/4}\\frac{x}{\\cos^{2}\\left(x\\right)\\left(1+\\tan\\left(x\\right)\\right)}dx\n $$ $$\\stackrel{\\tan\\left(x\\right)=u}{=}\\int_{0}^{1}\\frac{\\arctan\\left(u\\right)}{1+u}du\\stackrel{IBP}{=}\\frac{\\pi}{4}\\log\\left(2\\right)-\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\n $$ now note that $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\\stackrel{u=\\frac{1-y}{1+y}}{=}\\int_{0}^{1}\\frac{\\log\\left(2\\right)-\\log\\left(1+y\\right)}{1+y^{2}}dy\n $$ hence $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du=\\frac{\\log\\left(2\\right)}{2}\\int_{0}^{1}\\frac{dy}{1+y^{2}}=\\frac{\\pi}{8}\\log\\left(2\\right)\n $$ so $$I=\\color{red}{\\frac{\\pi}{8}\\log\\left(2\\right)}.$$ </p>\n"
},
{
"answer_id": 2084549,
"score": 1,
"text": "<p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\\frac{\\pi}{4}$ and then the change of variables $u=-w$ to get:</p>\n\n<p>$I=\\displaystyle{\\int_{-\\frac{\\pi}{4}}^{0}\\frac{u}{\\cos u(\\cos u -\\sin u)}du+\\frac{\\pi}{4}\\int_{-\\frac{\\pi}{4}}^{0}\\frac{1}{\\cos u(\\cos u -\\sin u)}du}=\\\\ -\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{w}{\\cos w(\\cos w +\\sin w)}dw+\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>and therefore we have:</p>\n\n<p>$I=-I +\\displaystyle{\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw\\Rightarrow I=\\frac{\\pi}{8}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>But with the substitution $y=tanw$ we obtain:</p>\n\n<p>$\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw=\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos^2 w}\\frac{1}{(1 +\\tan w)}dw=\\int_{0}^{1}\\frac{1}{1+y}dy=\\ln2}$</p>\n\n<p>and finally $I=\\frac{\\pi}{8}\\ln2$.</p>\n\n<p><em>Note:</em> In the change of variables $x=u+\\frac{\\pi}{4}$ we used the trigonometric identities:</p>\n\n<p>$\\cos (x+y)=\\cos x \\cos y - \\sin x \\sin y \\\\ \\sin (x+y)=\\sin x \\cos y + \\cos x \\sin y$</p>\n"
},
{
"answer_id": 3790850,
"score": 0,
"text": "<p><span class=\"math-container\">\\begin{align}J&=\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)}\\\\&\\overset{\\text{IBP}}=\\Big[\\ln\\left(1+\\tan x\\right)x\\Big]_0^{\\pi/4}-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx&\\overset{u=\\frac{\\pi}{4}-x}=\\int_0^{\\pi/4}\\ln\\left(1+\\frac{1-\\tan u}{1+\\tan u}\\right)du\\\\\n&=\\int_0^{\\pi/4}\\ln\\left(\\frac{2}{1+\\tan u}\\right)du\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du\\\\\n\\end{align}</span>\nTherefore,\n<span class=\"math-container\">\\begin{align}\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du&=\\frac{1}{8}\\pi\\ln 2\\\\\nJ&=\\boxed{\\frac{1}{8}\\pi\\ln 2}\n\\end{align}</span></p>\n<p>NB:\nIMHO when you have <span class=\"math-container\">$\\displaystyle \\int_a^b f(x)x dx$</span> check out if <span class=\"math-container\">$f$</span> have a simple antiderivative.\nOf course, check out first if <span class=\"math-container\">$f(a+b-x)=f(x)$</span></p>\n"
},
{
"answer_id": 2084511,
"score": 5,
"text": "<p>We have <span class=\"math-container\">$$I=\\int_{0}^{\\pi/4}\\frac{2x}{2\\cos^2x+2\\cos x\\sin x}dx=\\int_0^{\\pi/4}\\frac{2x}{\\cos 2x+\\sin 2x+1}dx=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{xdx}{\\cos x+\\sin x+1}=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{\\pi/2-x}{\\cos x+\\sin x+1}dx\\\\I=\\frac{1}{4}\\int_0^{\\pi/2}\\frac{\\pi/2}{\\cos x+\\sin x+1}dx$$</span>Now substitute <span class=\"math-container\">$x$</span> for <span class=\"math-container\">$2x$</span> again<span class=\"math-container\">$$=\\frac{\\pi}{4}\\int_0^{\\pi/4}\\frac{dx}{2\\cos^2x+2\\sin x\\cos x}=\\frac{\\pi}{8}\\int_0^{\\pi/4}\\frac{\\sec^2xdx}{1+\\tan x}=\\frac{\\pi}{8}\\int_0^{1}\\frac{1}{1+u}du=\\frac{\\pi}{8}\\log 2$$</span></p>\n"
},
{
"answer_id": 2084488,
"score": 4,
"text": "<p>Let $$I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\cos(x)+\\sin(x))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\sqrt2\\sin(x+\\frac{\\pi}{4}))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$ </p>\n\n<p>Since $$ I(x)=I(\\frac{\\pi}{4}-x)$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x)=\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\cos(\\frac{\\pi}{4}-x)(\\sqrt2\\sin(\\frac{\\pi}{4}-x+\\frac{\\pi}{4}))} dx$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x) =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>But $$ I(x)=I(\\frac{\\pi}{4}-x)$$ </p>\n\n<p>so $$ I+I =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})}+\\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>$$ 2I = \\frac{\\pi}{4\\sqrt2}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>Can you continue?</p>\n\n<p>$$ I = \\frac{\\sqrt2\\pi}{16}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)(\\frac{1}{\\sqrt{2}})(\\cos(x)+\\sin(x))} dx$$</p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec(x)}{\\sin(x)+\\cos(x)}\\cdot\\frac{\\frac{1}{\\cos(x)}}{\\frac{1}{\\cos(x)}} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec^2(x)}{\\tan(x)+1} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8} \\left[\\ln|\\tan(x)+1 \\right]^{\\frac{\\pi}{4}}_0$$</p>\n\n<p>$$ I = \\frac{\\pi}{8} \\cdot \\ln(2) $$</p>\n"
},
{
"answer_id": 2084502,
"score": 3,
"text": "<p>\\begin{align}\n\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)} &= \\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos^2(x)+\\cos(x)\\sin(x)}\\\\ &=\\int_0^{\\pi/4}\\frac{2x\\,dx}{\\cos(2x)+\\sin(2x)+1} \\\\&=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{t\\,dt}{\\cos(t)+\\sin(t)+1} \n\\end{align}</p>\n\n<p>By $t \\to 1-t$\n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2-t)\\,dt}{\\cos(t)+\\sin(t)+1} $$</p>\n\n<p>Separate the integrals \n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2)\\,dt}{\\cos(t)+\\sin(t)+1} -I$$</p>\n\n<p>$$I = \\frac{\\pi}{8}\\int_0^{\\pi/2}\\frac{\\,dt}{\\cos(t)+\\sin(t)+1} = \\frac{\\pi}{8}\\log(2)$$</p>\n\n<p>The last integral could be solved using <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>.</p>\n"
},
{
"answer_id": 2084520,
"score": 1,
"text": "<p>Note that $$I=\\int_{0}^{\\pi/4}\\frac{x}{\\cos\\left(x\\right)\\left(\\cos\\left(x\\right)+\\sin\\left(x\\right)\\right)}dx=\\int_{0}^{\\pi/4}\\frac{x}{\\cos^{2}\\left(x\\right)\\left(1+\\tan\\left(x\\right)\\right)}dx\n $$ $$\\stackrel{\\tan\\left(x\\right)=u}{=}\\int_{0}^{1}\\frac{\\arctan\\left(u\\right)}{1+u}du\\stackrel{IBP}{=}\\frac{\\pi}{4}\\log\\left(2\\right)-\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\n $$ now note that $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\\stackrel{u=\\frac{1-y}{1+y}}{=}\\int_{0}^{1}\\frac{\\log\\left(2\\right)-\\log\\left(1+y\\right)}{1+y^{2}}dy\n $$ hence $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du=\\frac{\\log\\left(2\\right)}{2}\\int_{0}^{1}\\frac{dy}{1+y^{2}}=\\frac{\\pi}{8}\\log\\left(2\\right)\n $$ so $$I=\\color{red}{\\frac{\\pi}{8}\\log\\left(2\\right)}.$$ </p>\n"
},
{
"answer_id": 2084549,
"score": 1,
"text": "<p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\\frac{\\pi}{4}$ and then the change of variables $u=-w$ to get:</p>\n\n<p>$I=\\displaystyle{\\int_{-\\frac{\\pi}{4}}^{0}\\frac{u}{\\cos u(\\cos u -\\sin u)}du+\\frac{\\pi}{4}\\int_{-\\frac{\\pi}{4}}^{0}\\frac{1}{\\cos u(\\cos u -\\sin u)}du}=\\\\ -\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{w}{\\cos w(\\cos w +\\sin w)}dw+\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>and therefore we have:</p>\n\n<p>$I=-I +\\displaystyle{\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw\\Rightarrow I=\\frac{\\pi}{8}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>But with the substitution $y=tanw$ we obtain:</p>\n\n<p>$\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw=\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos^2 w}\\frac{1}{(1 +\\tan w)}dw=\\int_{0}^{1}\\frac{1}{1+y}dy=\\ln2}$</p>\n\n<p>and finally $I=\\frac{\\pi}{8}\\ln2$.</p>\n\n<p><em>Note:</em> In the change of variables $x=u+\\frac{\\pi}{4}$ we used the trigonometric identities:</p>\n\n<p>$\\cos (x+y)=\\cos x \\cos y - \\sin x \\sin y \\\\ \\sin (x+y)=\\sin x \\cos y + \\cos x \\sin y$</p>\n"
},
{
"answer_id": 3790850,
"score": 0,
"text": "<p><span class=\"math-container\">\\begin{align}J&=\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)}\\\\&\\overset{\\text{IBP}}=\\Big[\\ln\\left(1+\\tan x\\right)x\\Big]_0^{\\pi/4}-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx&\\overset{u=\\frac{\\pi}{4}-x}=\\int_0^{\\pi/4}\\ln\\left(1+\\frac{1-\\tan u}{1+\\tan u}\\right)du\\\\\n&=\\int_0^{\\pi/4}\\ln\\left(\\frac{2}{1+\\tan u}\\right)du\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du\\\\\n\\end{align}</span>\nTherefore,\n<span class=\"math-container\">\\begin{align}\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du&=\\frac{1}{8}\\pi\\ln 2\\\\\nJ&=\\boxed{\\frac{1}{8}\\pi\\ln 2}\n\\end{align}</span></p>\n<p>NB:\nIMHO when you have <span class=\"math-container\">$\\displaystyle \\int_a^b f(x)x dx$</span> check out if <span class=\"math-container\">$f$</span> have a simple antiderivative.\nOf course, check out first if <span class=\"math-container\">$f(a+b-x)=f(x)$</span></p>\n"
},
{
"answer_id": 2084511,
"score": 5,
"text": "<p>We have <span class=\"math-container\">$$I=\\int_{0}^{\\pi/4}\\frac{2x}{2\\cos^2x+2\\cos x\\sin x}dx=\\int_0^{\\pi/4}\\frac{2x}{\\cos 2x+\\sin 2x+1}dx=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{xdx}{\\cos x+\\sin x+1}=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{\\pi/2-x}{\\cos x+\\sin x+1}dx\\\\I=\\frac{1}{4}\\int_0^{\\pi/2}\\frac{\\pi/2}{\\cos x+\\sin x+1}dx$$</span>Now substitute <span class=\"math-container\">$x$</span> for <span class=\"math-container\">$2x$</span> again<span class=\"math-container\">$$=\\frac{\\pi}{4}\\int_0^{\\pi/4}\\frac{dx}{2\\cos^2x+2\\sin x\\cos x}=\\frac{\\pi}{8}\\int_0^{\\pi/4}\\frac{\\sec^2xdx}{1+\\tan x}=\\frac{\\pi}{8}\\int_0^{1}\\frac{1}{1+u}du=\\frac{\\pi}{8}\\log 2$$</span></p>\n"
},
{
"answer_id": 2084488,
"score": 4,
"text": "<p>Let $$I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\cos(x)+\\sin(x))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\sqrt2\\sin(x+\\frac{\\pi}{4}))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$ </p>\n\n<p>Since $$ I(x)=I(\\frac{\\pi}{4}-x)$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x)=\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\cos(\\frac{\\pi}{4}-x)(\\sqrt2\\sin(\\frac{\\pi}{4}-x+\\frac{\\pi}{4}))} dx$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x) =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>But $$ I(x)=I(\\frac{\\pi}{4}-x)$$ </p>\n\n<p>so $$ I+I =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})}+\\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>$$ 2I = \\frac{\\pi}{4\\sqrt2}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>Can you continue?</p>\n\n<p>$$ I = \\frac{\\sqrt2\\pi}{16}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)(\\frac{1}{\\sqrt{2}})(\\cos(x)+\\sin(x))} dx$$</p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec(x)}{\\sin(x)+\\cos(x)}\\cdot\\frac{\\frac{1}{\\cos(x)}}{\\frac{1}{\\cos(x)}} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec^2(x)}{\\tan(x)+1} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8} \\left[\\ln|\\tan(x)+1 \\right]^{\\frac{\\pi}{4}}_0$$</p>\n\n<p>$$ I = \\frac{\\pi}{8} \\cdot \\ln(2) $$</p>\n"
},
{
"answer_id": 2084502,
"score": 3,
"text": "<p>\\begin{align}\n\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)} &= \\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos^2(x)+\\cos(x)\\sin(x)}\\\\ &=\\int_0^{\\pi/4}\\frac{2x\\,dx}{\\cos(2x)+\\sin(2x)+1} \\\\&=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{t\\,dt}{\\cos(t)+\\sin(t)+1} \n\\end{align}</p>\n\n<p>By $t \\to 1-t$\n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2-t)\\,dt}{\\cos(t)+\\sin(t)+1} $$</p>\n\n<p>Separate the integrals \n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2)\\,dt}{\\cos(t)+\\sin(t)+1} -I$$</p>\n\n<p>$$I = \\frac{\\pi}{8}\\int_0^{\\pi/2}\\frac{\\,dt}{\\cos(t)+\\sin(t)+1} = \\frac{\\pi}{8}\\log(2)$$</p>\n\n<p>The last integral could be solved using <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>.</p>\n"
},
{
"answer_id": 2084520,
"score": 1,
"text": "<p>Note that $$I=\\int_{0}^{\\pi/4}\\frac{x}{\\cos\\left(x\\right)\\left(\\cos\\left(x\\right)+\\sin\\left(x\\right)\\right)}dx=\\int_{0}^{\\pi/4}\\frac{x}{\\cos^{2}\\left(x\\right)\\left(1+\\tan\\left(x\\right)\\right)}dx\n $$ $$\\stackrel{\\tan\\left(x\\right)=u}{=}\\int_{0}^{1}\\frac{\\arctan\\left(u\\right)}{1+u}du\\stackrel{IBP}{=}\\frac{\\pi}{4}\\log\\left(2\\right)-\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\n $$ now note that $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\\stackrel{u=\\frac{1-y}{1+y}}{=}\\int_{0}^{1}\\frac{\\log\\left(2\\right)-\\log\\left(1+y\\right)}{1+y^{2}}dy\n $$ hence $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du=\\frac{\\log\\left(2\\right)}{2}\\int_{0}^{1}\\frac{dy}{1+y^{2}}=\\frac{\\pi}{8}\\log\\left(2\\right)\n $$ so $$I=\\color{red}{\\frac{\\pi}{8}\\log\\left(2\\right)}.$$ </p>\n"
},
{
"answer_id": 2084549,
"score": 1,
"text": "<p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\\frac{\\pi}{4}$ and then the change of variables $u=-w$ to get:</p>\n\n<p>$I=\\displaystyle{\\int_{-\\frac{\\pi}{4}}^{0}\\frac{u}{\\cos u(\\cos u -\\sin u)}du+\\frac{\\pi}{4}\\int_{-\\frac{\\pi}{4}}^{0}\\frac{1}{\\cos u(\\cos u -\\sin u)}du}=\\\\ -\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{w}{\\cos w(\\cos w +\\sin w)}dw+\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>and therefore we have:</p>\n\n<p>$I=-I +\\displaystyle{\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw\\Rightarrow I=\\frac{\\pi}{8}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>But with the substitution $y=tanw$ we obtain:</p>\n\n<p>$\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw=\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos^2 w}\\frac{1}{(1 +\\tan w)}dw=\\int_{0}^{1}\\frac{1}{1+y}dy=\\ln2}$</p>\n\n<p>and finally $I=\\frac{\\pi}{8}\\ln2$.</p>\n\n<p><em>Note:</em> In the change of variables $x=u+\\frac{\\pi}{4}$ we used the trigonometric identities:</p>\n\n<p>$\\cos (x+y)=\\cos x \\cos y - \\sin x \\sin y \\\\ \\sin (x+y)=\\sin x \\cos y + \\cos x \\sin y$</p>\n"
},
{
"answer_id": 3790850,
"score": 0,
"text": "<p><span class=\"math-container\">\\begin{align}J&=\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)}\\\\&\\overset{\\text{IBP}}=\\Big[\\ln\\left(1+\\tan x\\right)x\\Big]_0^{\\pi/4}-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx&\\overset{u=\\frac{\\pi}{4}-x}=\\int_0^{\\pi/4}\\ln\\left(1+\\frac{1-\\tan u}{1+\\tan u}\\right)du\\\\\n&=\\int_0^{\\pi/4}\\ln\\left(\\frac{2}{1+\\tan u}\\right)du\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du\\\\\n\\end{align}</span>\nTherefore,\n<span class=\"math-container\">\\begin{align}\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du&=\\frac{1}{8}\\pi\\ln 2\\\\\nJ&=\\boxed{\\frac{1}{8}\\pi\\ln 2}\n\\end{align}</span></p>\n<p>NB:\nIMHO when you have <span class=\"math-container\">$\\displaystyle \\int_a^b f(x)x dx$</span> check out if <span class=\"math-container\">$f$</span> have a simple antiderivative.\nOf course, check out first if <span class=\"math-container\">$f(a+b-x)=f(x)$</span></p>\n"
},
{
"answer_id": 2084511,
"score": 5,
"text": "<p>We have <span class=\"math-container\">$$I=\\int_{0}^{\\pi/4}\\frac{2x}{2\\cos^2x+2\\cos x\\sin x}dx=\\int_0^{\\pi/4}\\frac{2x}{\\cos 2x+\\sin 2x+1}dx=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{xdx}{\\cos x+\\sin x+1}=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{\\pi/2-x}{\\cos x+\\sin x+1}dx\\\\I=\\frac{1}{4}\\int_0^{\\pi/2}\\frac{\\pi/2}{\\cos x+\\sin x+1}dx$$</span>Now substitute <span class=\"math-container\">$x$</span> for <span class=\"math-container\">$2x$</span> again<span class=\"math-container\">$$=\\frac{\\pi}{4}\\int_0^{\\pi/4}\\frac{dx}{2\\cos^2x+2\\sin x\\cos x}=\\frac{\\pi}{8}\\int_0^{\\pi/4}\\frac{\\sec^2xdx}{1+\\tan x}=\\frac{\\pi}{8}\\int_0^{1}\\frac{1}{1+u}du=\\frac{\\pi}{8}\\log 2$$</span></p>\n"
},
{
"answer_id": 2084488,
"score": 4,
"text": "<p>Let $$I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\cos(x)+\\sin(x))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\sqrt2\\sin(x+\\frac{\\pi}{4}))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$ </p>\n\n<p>Since $$ I(x)=I(\\frac{\\pi}{4}-x)$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x)=\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\cos(\\frac{\\pi}{4}-x)(\\sqrt2\\sin(\\frac{\\pi}{4}-x+\\frac{\\pi}{4}))} dx$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x) =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>But $$ I(x)=I(\\frac{\\pi}{4}-x)$$ </p>\n\n<p>so $$ I+I =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})}+\\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>$$ 2I = \\frac{\\pi}{4\\sqrt2}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>Can you continue?</p>\n\n<p>$$ I = \\frac{\\sqrt2\\pi}{16}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)(\\frac{1}{\\sqrt{2}})(\\cos(x)+\\sin(x))} dx$$</p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec(x)}{\\sin(x)+\\cos(x)}\\cdot\\frac{\\frac{1}{\\cos(x)}}{\\frac{1}{\\cos(x)}} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec^2(x)}{\\tan(x)+1} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8} \\left[\\ln|\\tan(x)+1 \\right]^{\\frac{\\pi}{4}}_0$$</p>\n\n<p>$$ I = \\frac{\\pi}{8} \\cdot \\ln(2) $$</p>\n"
},
{
"answer_id": 2084502,
"score": 3,
"text": "<p>\\begin{align}\n\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)} &= \\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos^2(x)+\\cos(x)\\sin(x)}\\\\ &=\\int_0^{\\pi/4}\\frac{2x\\,dx}{\\cos(2x)+\\sin(2x)+1} \\\\&=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{t\\,dt}{\\cos(t)+\\sin(t)+1} \n\\end{align}</p>\n\n<p>By $t \\to 1-t$\n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2-t)\\,dt}{\\cos(t)+\\sin(t)+1} $$</p>\n\n<p>Separate the integrals \n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2)\\,dt}{\\cos(t)+\\sin(t)+1} -I$$</p>\n\n<p>$$I = \\frac{\\pi}{8}\\int_0^{\\pi/2}\\frac{\\,dt}{\\cos(t)+\\sin(t)+1} = \\frac{\\pi}{8}\\log(2)$$</p>\n\n<p>The last integral could be solved using <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>.</p>\n"
},
{
"answer_id": 2084520,
"score": 1,
"text": "<p>Note that $$I=\\int_{0}^{\\pi/4}\\frac{x}{\\cos\\left(x\\right)\\left(\\cos\\left(x\\right)+\\sin\\left(x\\right)\\right)}dx=\\int_{0}^{\\pi/4}\\frac{x}{\\cos^{2}\\left(x\\right)\\left(1+\\tan\\left(x\\right)\\right)}dx\n $$ $$\\stackrel{\\tan\\left(x\\right)=u}{=}\\int_{0}^{1}\\frac{\\arctan\\left(u\\right)}{1+u}du\\stackrel{IBP}{=}\\frac{\\pi}{4}\\log\\left(2\\right)-\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\n $$ now note that $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\\stackrel{u=\\frac{1-y}{1+y}}{=}\\int_{0}^{1}\\frac{\\log\\left(2\\right)-\\log\\left(1+y\\right)}{1+y^{2}}dy\n $$ hence $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du=\\frac{\\log\\left(2\\right)}{2}\\int_{0}^{1}\\frac{dy}{1+y^{2}}=\\frac{\\pi}{8}\\log\\left(2\\right)\n $$ so $$I=\\color{red}{\\frac{\\pi}{8}\\log\\left(2\\right)}.$$ </p>\n"
},
{
"answer_id": 2084549,
"score": 1,
"text": "<p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\\frac{\\pi}{4}$ and then the change of variables $u=-w$ to get:</p>\n\n<p>$I=\\displaystyle{\\int_{-\\frac{\\pi}{4}}^{0}\\frac{u}{\\cos u(\\cos u -\\sin u)}du+\\frac{\\pi}{4}\\int_{-\\frac{\\pi}{4}}^{0}\\frac{1}{\\cos u(\\cos u -\\sin u)}du}=\\\\ -\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{w}{\\cos w(\\cos w +\\sin w)}dw+\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>and therefore we have:</p>\n\n<p>$I=-I +\\displaystyle{\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw\\Rightarrow I=\\frac{\\pi}{8}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>But with the substitution $y=tanw$ we obtain:</p>\n\n<p>$\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw=\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos^2 w}\\frac{1}{(1 +\\tan w)}dw=\\int_{0}^{1}\\frac{1}{1+y}dy=\\ln2}$</p>\n\n<p>and finally $I=\\frac{\\pi}{8}\\ln2$.</p>\n\n<p><em>Note:</em> In the change of variables $x=u+\\frac{\\pi}{4}$ we used the trigonometric identities:</p>\n\n<p>$\\cos (x+y)=\\cos x \\cos y - \\sin x \\sin y \\\\ \\sin (x+y)=\\sin x \\cos y + \\cos x \\sin y$</p>\n"
},
{
"answer_id": 3790850,
"score": 0,
"text": "<p><span class=\"math-container\">\\begin{align}J&=\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)}\\\\&\\overset{\\text{IBP}}=\\Big[\\ln\\left(1+\\tan x\\right)x\\Big]_0^{\\pi/4}-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx&\\overset{u=\\frac{\\pi}{4}-x}=\\int_0^{\\pi/4}\\ln\\left(1+\\frac{1-\\tan u}{1+\\tan u}\\right)du\\\\\n&=\\int_0^{\\pi/4}\\ln\\left(\\frac{2}{1+\\tan u}\\right)du\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du\\\\\n\\end{align}</span>\nTherefore,\n<span class=\"math-container\">\\begin{align}\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du&=\\frac{1}{8}\\pi\\ln 2\\\\\nJ&=\\boxed{\\frac{1}{8}\\pi\\ln 2}\n\\end{align}</span></p>\n<p>NB:\nIMHO when you have <span class=\"math-container\">$\\displaystyle \\int_a^b f(x)x dx$</span> check out if <span class=\"math-container\">$f$</span> have a simple antiderivative.\nOf course, check out first if <span class=\"math-container\">$f(a+b-x)=f(x)$</span></p>\n"
},
{
"answer_id": 2084511,
"score": 5,
"text": "<p>We have <span class=\"math-container\">$$I=\\int_{0}^{\\pi/4}\\frac{2x}{2\\cos^2x+2\\cos x\\sin x}dx=\\int_0^{\\pi/4}\\frac{2x}{\\cos 2x+\\sin 2x+1}dx=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{xdx}{\\cos x+\\sin x+1}=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{\\pi/2-x}{\\cos x+\\sin x+1}dx\\\\I=\\frac{1}{4}\\int_0^{\\pi/2}\\frac{\\pi/2}{\\cos x+\\sin x+1}dx$$</span>Now substitute <span class=\"math-container\">$x$</span> for <span class=\"math-container\">$2x$</span> again<span class=\"math-container\">$$=\\frac{\\pi}{4}\\int_0^{\\pi/4}\\frac{dx}{2\\cos^2x+2\\sin x\\cos x}=\\frac{\\pi}{8}\\int_0^{\\pi/4}\\frac{\\sec^2xdx}{1+\\tan x}=\\frac{\\pi}{8}\\int_0^{1}\\frac{1}{1+u}du=\\frac{\\pi}{8}\\log 2$$</span></p>\n"
},
{
"answer_id": 2084488,
"score": 4,
"text": "<p>Let $$I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\cos(x)+\\sin(x))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\sqrt2\\sin(x+\\frac{\\pi}{4}))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$ </p>\n\n<p>Since $$ I(x)=I(\\frac{\\pi}{4}-x)$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x)=\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\cos(\\frac{\\pi}{4}-x)(\\sqrt2\\sin(\\frac{\\pi}{4}-x+\\frac{\\pi}{4}))} dx$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x) =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>But $$ I(x)=I(\\frac{\\pi}{4}-x)$$ </p>\n\n<p>so $$ I+I =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})}+\\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>$$ 2I = \\frac{\\pi}{4\\sqrt2}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>Can you continue?</p>\n\n<p>$$ I = \\frac{\\sqrt2\\pi}{16}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)(\\frac{1}{\\sqrt{2}})(\\cos(x)+\\sin(x))} dx$$</p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec(x)}{\\sin(x)+\\cos(x)}\\cdot\\frac{\\frac{1}{\\cos(x)}}{\\frac{1}{\\cos(x)}} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec^2(x)}{\\tan(x)+1} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8} \\left[\\ln|\\tan(x)+1 \\right]^{\\frac{\\pi}{4}}_0$$</p>\n\n<p>$$ I = \\frac{\\pi}{8} \\cdot \\ln(2) $$</p>\n"
},
{
"answer_id": 2084502,
"score": 3,
"text": "<p>\\begin{align}\n\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)} &= \\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos^2(x)+\\cos(x)\\sin(x)}\\\\ &=\\int_0^{\\pi/4}\\frac{2x\\,dx}{\\cos(2x)+\\sin(2x)+1} \\\\&=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{t\\,dt}{\\cos(t)+\\sin(t)+1} \n\\end{align}</p>\n\n<p>By $t \\to 1-t$\n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2-t)\\,dt}{\\cos(t)+\\sin(t)+1} $$</p>\n\n<p>Separate the integrals \n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2)\\,dt}{\\cos(t)+\\sin(t)+1} -I$$</p>\n\n<p>$$I = \\frac{\\pi}{8}\\int_0^{\\pi/2}\\frac{\\,dt}{\\cos(t)+\\sin(t)+1} = \\frac{\\pi}{8}\\log(2)$$</p>\n\n<p>The last integral could be solved using <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>.</p>\n"
},
{
"answer_id": 2084520,
"score": 1,
"text": "<p>Note that $$I=\\int_{0}^{\\pi/4}\\frac{x}{\\cos\\left(x\\right)\\left(\\cos\\left(x\\right)+\\sin\\left(x\\right)\\right)}dx=\\int_{0}^{\\pi/4}\\frac{x}{\\cos^{2}\\left(x\\right)\\left(1+\\tan\\left(x\\right)\\right)}dx\n $$ $$\\stackrel{\\tan\\left(x\\right)=u}{=}\\int_{0}^{1}\\frac{\\arctan\\left(u\\right)}{1+u}du\\stackrel{IBP}{=}\\frac{\\pi}{4}\\log\\left(2\\right)-\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\n $$ now note that $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\\stackrel{u=\\frac{1-y}{1+y}}{=}\\int_{0}^{1}\\frac{\\log\\left(2\\right)-\\log\\left(1+y\\right)}{1+y^{2}}dy\n $$ hence $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du=\\frac{\\log\\left(2\\right)}{2}\\int_{0}^{1}\\frac{dy}{1+y^{2}}=\\frac{\\pi}{8}\\log\\left(2\\right)\n $$ so $$I=\\color{red}{\\frac{\\pi}{8}\\log\\left(2\\right)}.$$ </p>\n"
},
{
"answer_id": 2084549,
"score": 1,
"text": "<p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\\frac{\\pi}{4}$ and then the change of variables $u=-w$ to get:</p>\n\n<p>$I=\\displaystyle{\\int_{-\\frac{\\pi}{4}}^{0}\\frac{u}{\\cos u(\\cos u -\\sin u)}du+\\frac{\\pi}{4}\\int_{-\\frac{\\pi}{4}}^{0}\\frac{1}{\\cos u(\\cos u -\\sin u)}du}=\\\\ -\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{w}{\\cos w(\\cos w +\\sin w)}dw+\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>and therefore we have:</p>\n\n<p>$I=-I +\\displaystyle{\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw\\Rightarrow I=\\frac{\\pi}{8}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>But with the substitution $y=tanw$ we obtain:</p>\n\n<p>$\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw=\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos^2 w}\\frac{1}{(1 +\\tan w)}dw=\\int_{0}^{1}\\frac{1}{1+y}dy=\\ln2}$</p>\n\n<p>and finally $I=\\frac{\\pi}{8}\\ln2$.</p>\n\n<p><em>Note:</em> In the change of variables $x=u+\\frac{\\pi}{4}$ we used the trigonometric identities:</p>\n\n<p>$\\cos (x+y)=\\cos x \\cos y - \\sin x \\sin y \\\\ \\sin (x+y)=\\sin x \\cos y + \\cos x \\sin y$</p>\n"
},
{
"answer_id": 3790850,
"score": 0,
"text": "<p><span class=\"math-container\">\\begin{align}J&=\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)}\\\\&\\overset{\\text{IBP}}=\\Big[\\ln\\left(1+\\tan x\\right)x\\Big]_0^{\\pi/4}-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx&\\overset{u=\\frac{\\pi}{4}-x}=\\int_0^{\\pi/4}\\ln\\left(1+\\frac{1-\\tan u}{1+\\tan u}\\right)du\\\\\n&=\\int_0^{\\pi/4}\\ln\\left(\\frac{2}{1+\\tan u}\\right)du\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du\\\\\n\\end{align}</span>\nTherefore,\n<span class=\"math-container\">\\begin{align}\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du&=\\frac{1}{8}\\pi\\ln 2\\\\\nJ&=\\boxed{\\frac{1}{8}\\pi\\ln 2}\n\\end{align}</span></p>\n<p>NB:\nIMHO when you have <span class=\"math-container\">$\\displaystyle \\int_a^b f(x)x dx$</span> check out if <span class=\"math-container\">$f$</span> have a simple antiderivative.\nOf course, check out first if <span class=\"math-container\">$f(a+b-x)=f(x)$</span></p>\n"
},
{
"answer_id": 2084511,
"score": 5,
"text": "<p>We have <span class=\"math-container\">$$I=\\int_{0}^{\\pi/4}\\frac{2x}{2\\cos^2x+2\\cos x\\sin x}dx=\\int_0^{\\pi/4}\\frac{2x}{\\cos 2x+\\sin 2x+1}dx=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{xdx}{\\cos x+\\sin x+1}=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{\\pi/2-x}{\\cos x+\\sin x+1}dx\\\\I=\\frac{1}{4}\\int_0^{\\pi/2}\\frac{\\pi/2}{\\cos x+\\sin x+1}dx$$</span>Now substitute <span class=\"math-container\">$x$</span> for <span class=\"math-container\">$2x$</span> again<span class=\"math-container\">$$=\\frac{\\pi}{4}\\int_0^{\\pi/4}\\frac{dx}{2\\cos^2x+2\\sin x\\cos x}=\\frac{\\pi}{8}\\int_0^{\\pi/4}\\frac{\\sec^2xdx}{1+\\tan x}=\\frac{\\pi}{8}\\int_0^{1}\\frac{1}{1+u}du=\\frac{\\pi}{8}\\log 2$$</span></p>\n"
},
{
"answer_id": 2084488,
"score": 4,
"text": "<p>Let $$I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\cos(x)+\\sin(x))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\cos(x)(\\sqrt2\\sin(x+\\frac{\\pi}{4}))} dx$$ </p>\n\n<p>$$ I(x)= \\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$ </p>\n\n<p>Since $$ I(x)=I(\\frac{\\pi}{4}-x)$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x)=\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\cos(\\frac{\\pi}{4}-x)(\\sqrt2\\sin(\\frac{\\pi}{4}-x+\\frac{\\pi}{4}))} dx$$</p>\n\n<p>$$ I(\\frac{\\pi}{4}-x) =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>But $$ I(x)=I(\\frac{\\pi}{4}-x)$$ </p>\n\n<p>so $$ I+I =\\int^{\\frac{\\pi}{4}}_0 \\frac{\\frac{\\pi}{4}-x}{\\sqrt2 \\cos(x)\\sin(x+\\frac{\\pi}{4})}+\\int^{\\frac{\\pi}{4}}_0 \\frac{x}{\\sqrt2\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>$$ 2I = \\frac{\\pi}{4\\sqrt2}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)\\sin(x+\\frac{\\pi}{4})} dx$$</p>\n\n<p>Can you continue?</p>\n\n<p>$$ I = \\frac{\\sqrt2\\pi}{16}\\int^{\\frac{\\pi}{4}}_0 \\frac{1}{\\cos(x)(\\frac{1}{\\sqrt{2}})(\\cos(x)+\\sin(x))} dx$$</p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec(x)}{\\sin(x)+\\cos(x)}\\cdot\\frac{\\frac{1}{\\cos(x)}}{\\frac{1}{\\cos(x)}} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8}\\int^{\\frac{\\pi}{4}}_0 \\frac{\\sec^2(x)}{\\tan(x)+1} dx$$ </p>\n\n<p>$$ I = \\frac{\\pi}{8} \\left[\\ln|\\tan(x)+1 \\right]^{\\frac{\\pi}{4}}_0$$</p>\n\n<p>$$ I = \\frac{\\pi}{8} \\cdot \\ln(2) $$</p>\n"
},
{
"answer_id": 2084502,
"score": 3,
"text": "<p>\\begin{align}\n\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)} &= \\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos^2(x)+\\cos(x)\\sin(x)}\\\\ &=\\int_0^{\\pi/4}\\frac{2x\\,dx}{\\cos(2x)+\\sin(2x)+1} \\\\&=\\frac{1}{2}\\int_0^{\\pi/2}\\frac{t\\,dt}{\\cos(t)+\\sin(t)+1} \n\\end{align}</p>\n\n<p>By $t \\to 1-t$\n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2-t)\\,dt}{\\cos(t)+\\sin(t)+1} $$</p>\n\n<p>Separate the integrals \n$$I =\\frac{1}{2}\\int_0^{\\pi/2}\\frac{(\\pi/2)\\,dt}{\\cos(t)+\\sin(t)+1} -I$$</p>\n\n<p>$$I = \\frac{\\pi}{8}\\int_0^{\\pi/2}\\frac{\\,dt}{\\cos(t)+\\sin(t)+1} = \\frac{\\pi}{8}\\log(2)$$</p>\n\n<p>The last integral could be solved using <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>.</p>\n"
},
{
"answer_id": 2084520,
"score": 1,
"text": "<p>Note that $$I=\\int_{0}^{\\pi/4}\\frac{x}{\\cos\\left(x\\right)\\left(\\cos\\left(x\\right)+\\sin\\left(x\\right)\\right)}dx=\\int_{0}^{\\pi/4}\\frac{x}{\\cos^{2}\\left(x\\right)\\left(1+\\tan\\left(x\\right)\\right)}dx\n $$ $$\\stackrel{\\tan\\left(x\\right)=u}{=}\\int_{0}^{1}\\frac{\\arctan\\left(u\\right)}{1+u}du\\stackrel{IBP}{=}\\frac{\\pi}{4}\\log\\left(2\\right)-\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\n $$ now note that $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du\\stackrel{u=\\frac{1-y}{1+y}}{=}\\int_{0}^{1}\\frac{\\log\\left(2\\right)-\\log\\left(1+y\\right)}{1+y^{2}}dy\n $$ hence $$\\int_{0}^{1}\\frac{\\log\\left(1+u\\right)}{1+u^{2}}du=\\frac{\\log\\left(2\\right)}{2}\\int_{0}^{1}\\frac{dy}{1+y^{2}}=\\frac{\\pi}{8}\\log\\left(2\\right)\n $$ so $$I=\\color{red}{\\frac{\\pi}{8}\\log\\left(2\\right)}.$$ </p>\n"
},
{
"answer_id": 2084549,
"score": 1,
"text": "<p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\\frac{\\pi}{4}$ and then the change of variables $u=-w$ to get:</p>\n\n<p>$I=\\displaystyle{\\int_{-\\frac{\\pi}{4}}^{0}\\frac{u}{\\cos u(\\cos u -\\sin u)}du+\\frac{\\pi}{4}\\int_{-\\frac{\\pi}{4}}^{0}\\frac{1}{\\cos u(\\cos u -\\sin u)}du}=\\\\ -\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{w}{\\cos w(\\cos w +\\sin w)}dw+\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>and therefore we have:</p>\n\n<p>$I=-I +\\displaystyle{\\frac{\\pi}{4}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw\\Rightarrow I=\\frac{\\pi}{8}\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw}$</p>\n\n<p>But with the substitution $y=tanw$ we obtain:</p>\n\n<p>$\\displaystyle{\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos w(\\cos w +\\sin w)}dw=\\int_{0}^{\\frac{\\pi}{4}}\\frac{1}{\\cos^2 w}\\frac{1}{(1 +\\tan w)}dw=\\int_{0}^{1}\\frac{1}{1+y}dy=\\ln2}$</p>\n\n<p>and finally $I=\\frac{\\pi}{8}\\ln2$.</p>\n\n<p><em>Note:</em> In the change of variables $x=u+\\frac{\\pi}{4}$ we used the trigonometric identities:</p>\n\n<p>$\\cos (x+y)=\\cos x \\cos y - \\sin x \\sin y \\\\ \\sin (x+y)=\\sin x \\cos y + \\cos x \\sin y$</p>\n"
},
{
"answer_id": 3790850,
"score": 0,
"text": "<p><span class=\"math-container\">\\begin{align}J&=\\int_0^{\\pi/4}\\frac{x\\,dx}{\\cos x(\\cos x + \\sin x)}\\\\&\\overset{\\text{IBP}}=\\Big[\\ln\\left(1+\\tan x\\right)x\\Big]_0^{\\pi/4}-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx\\\\\n\\int_0^{\\pi/4}\\ln\\left(1+\\tan x\\right)dx&\\overset{u=\\frac{\\pi}{4}-x}=\\int_0^{\\pi/4}\\ln\\left(1+\\frac{1-\\tan u}{1+\\tan u}\\right)du\\\\\n&=\\int_0^{\\pi/4}\\ln\\left(\\frac{2}{1+\\tan u}\\right)du\\\\\n&=\\frac{1}{4}\\pi\\ln 2-\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du\\\\\n\\end{align}</span>\nTherefore,\n<span class=\"math-container\">\\begin{align}\\int_0^{\\pi/4}\\ln\\left(1+\\tan u\\right)du&=\\frac{1}{8}\\pi\\ln 2\\\\\nJ&=\\boxed{\\frac{1}{8}\\pi\\ln 2}\n\\end{align}</span></p>\n<p>NB:\nIMHO when you have <span class=\"math-container\">$\\displaystyle \\int_a^b f(x)x dx$</span> check out if <span class=\"math-container\">$f$</span> have a simple antiderivative.\nOf course, check out first if <span class=\"math-container\">$f(a+b-x)=f(x)$</span></p>\n"
}
] | Question: Doing definite integration $\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}$
Answer 1: <p>We have <span class="math-container">$$I=\int_{0}^{\pi/4}\frac{2x}{2\cos^2x+2\cos x\sin x}dx=\int_0^{\pi/4}\frac{2x}{\cos 2x+\sin 2x+1}dx=\frac{1}{2}\int_0^{\pi/2}\frac{xdx}{\cos x+\sin x+1}=\frac{1}{2}\int_0^{\pi/2}\frac{\pi/2-x}{\cos x+\sin x+1}dx\\I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$$</span>Now substitute <span class="math-container">$x$</span> for <span class="math-container">$2x$</span> again<span class="math-container">$$=\frac{\pi}{4}\int_0^{\pi/4}\frac{dx}{2\cos^2x+2\sin x\cos x}=\frac{\pi}{8}\int_0^{\pi/4}\frac{\sec^2xdx}{1+\tan x}=\frac{\pi}{8}\int_0^{1}\frac{1}{1+u}du=\frac{\pi}{8}\log 2$$</span></p>
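The reflection step above ($x$ replaced by $\pi/2-x$) is the standard identity $\int_0^a f(x)\,dx=\int_0^a f(a-x)\,dx$; spelled out for this integrand (with $f$ as generic shorthand, not notation from the answer): since $\cos x+\sin x+1$ is invariant under $x\to\pi/2-x$, adding the two equal forms of the integral gives $$2I=\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dx}{\cos x+\sin x+1},$$ which is the displayed line $I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$.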
Answer 2: <p>Let $$I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\cos(x)+\sin(x))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\sqrt2\sin(x+\frac{\pi}{4}))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$ </p>
<p>Since $$ I(x)=I(\frac{\pi}{4}-x)$$</p>
<p>$$ I(\frac{\pi}{4}-x)=\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\cos(\frac{\pi}{4}-x)(\sqrt2\sin(\frac{\pi}{4}-x+\frac{\pi}{4}))} dx$$</p>
<p>$$ I(\frac{\pi}{4}-x) =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>But $$ I(x)=I(\frac{\pi}{4}-x)$$ </p>
<p>so $$ I+I =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})}+\int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>$$ 2I = \frac{\pi}{4\sqrt2}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>Can you continue?</p>
<p>$$ I = \frac{\sqrt2\pi}{16}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)(\frac{1}{\sqrt{2}})(\cos(x)+\sin(x))} dx$$</p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec(x)}{\sin(x)+\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{\frac{1}{\cos(x)}} dx$$ </p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec^2(x)}{\tan(x)+1} dx$$ </p>
<p>$$ I = \frac{\pi}{8} \left[\ln|\tan(x)+1| \right]^{\frac{\pi}{4}}_0$$</p>
<p>$$ I = \frac{\pi}{8} \cdot \ln(2) $$</p>
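The jump from the $\sec^2$ integral to the bracket is the substitution $u=\tan x$, $du=\sec^2 x\,dx$, which sends $[0,\pi/4]$ to $[0,1]$: $$\int_0^{\pi/4}\frac{\sec^2 x}{\tan x+1}\,dx=\int_0^1\frac{du}{1+u}=\ln 2.$$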
Answer 3: <p>\begin{align}
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} &= \int_0^{\pi/4}\frac{x\,dx}{\cos^2(x)+\cos(x)\sin(x)}\\ &=\int_0^{\pi/4}\frac{2x\,dx}{\cos(2x)+\sin(2x)+1} \\&=\frac{1}{2}\int_0^{\pi/2}\frac{t\,dt}{\cos(t)+\sin(t)+1}
\end{align}</p>
<p>By $t \to \pi/2-t$ (writing $I$ for the integral above)
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2-t)\,dt}{\cos(t)+\sin(t)+1} $$</p>
<p>Separate the integrals
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dt}{\cos(t)+\sin(t)+1} -I$$</p>
<p>$$I = \frac{\pi}{8}\int_0^{\pi/2}\frac{\,dt}{\cos(t)+\sin(t)+1} = \frac{\pi}{8}\log(2)$$</p>
<p>The last integral could be solved using <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
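For completeness, a sketch of that Weierstrass substitution: with $t=\tan(x/2)$ one has $\cos x=\frac{1-t^2}{1+t^2}$, $\sin x=\frac{2t}{1+t^2}$, $dx=\frac{2\,dt}{1+t^2}$, hence $\cos x+\sin x+1=\frac{2+2t}{1+t^2}$ and $$\int_0^{\pi/2}\frac{dx}{\cos x+\sin x+1}=\int_0^1\frac{dt}{1+t}=\log 2.$$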
Answer 4: <p>Note that $$I=\int_{0}^{\pi/4}\frac{x}{\cos\left(x\right)\left(\cos\left(x\right)+\sin\left(x\right)\right)}dx=\int_{0}^{\pi/4}\frac{x}{\cos^{2}\left(x\right)\left(1+\tan\left(x\right)\right)}dx
$$ $$\stackrel{\tan\left(x\right)=u}{=}\int_{0}^{1}\frac{\arctan\left(u\right)}{1+u}du\stackrel{IBP}{=}\frac{\pi}{4}\log\left(2\right)-\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du
$$ now note that $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du\stackrel{u=\frac{1-y}{1+y}}{=}\int_{0}^{1}\frac{\log\left(2\right)-\log\left(1+y\right)}{1+y^{2}}dy
$$ hence $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du=\frac{\log\left(2\right)}{2}\int_{0}^{1}\frac{dy}{1+y^{2}}=\frac{\pi}{8}\log\left(2\right)
$$ so $$I=\color{red}{\frac{\pi}{8}\log\left(2\right)}.$$ </p>
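All of these routes agree on $\frac{\pi}{8}\log 2\approx 0.27220$; a minimal numerical sanity check, assuming NumPy and SciPy are available:

from scipy.integrate import quad  # adaptive numerical quadrature
import numpy as np

# integrand of the original definite integral on [0, pi/4]
f = lambda x: x / (np.cos(x) * (np.cos(x) + np.sin(x)))

numeric, _ = quad(f, 0, np.pi / 4)
print(numeric)                 # ~0.27219826
print(np.pi / 8 * np.log(2))   # ~0.27219826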
Answer 5: <p>Let $I$ be the integral that we want to compute. First we perform the change of variables $x=u+\frac{\pi}{4}$ and then the change of variables $u=-w$ to get:</p>
<p>$I=\displaystyle{\int_{-\frac{\pi}{4}}^{0}\frac{u}{\cos u(\cos u -\sin u)}du+\frac{\pi}{4}\int_{-\frac{\pi}{4}}^{0}\frac{1}{\cos u(\cos u -\sin u)}du}=\\ -\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{w}{\cos w(\cos w +\sin w)}dw+\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>and therefore we have:</p>
<p>$I=-I +\displaystyle{\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw\Rightarrow I=\frac{\pi}{8}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>But with the substitution $y=\tan w$ we obtain:</p>
<p>$\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw=\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos^2 w}\frac{1}{(1 +\tan w)}dw=\int_{0}^{1}\frac{1}{1+y}dy=\ln2}$</p>
<p>and finally $I=\frac{\pi}{8}\ln2$.</p>
<p><em>Note:</em> In the change of variables $x=u+\frac{\pi}{4}$ we used the trigonometric identities:</p>
<p>$\cos (x+y)=\cos x \cos y - \sin x \sin y \\ \sin (x+y)=\sin x \cos y + \cos x \sin y$</p>
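To see how the quoted identities produce the first display: with $x=u+\frac{\pi}{4}$, $$\cos\left(u+\tfrac{\pi}{4}\right)=\frac{\cos u-\sin u}{\sqrt2},\qquad \cos\left(u+\tfrac{\pi}{4}\right)+\sin\left(u+\tfrac{\pi}{4}\right)=\sqrt2\,\cos u,$$ so the denominator becomes $\cos u\,(\cos u-\sin u)$ while the numerator splits as $u+\frac{\pi}{4}$, giving the two integrals shown.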
Answer 6: <p><span class="math-container">\begin{align}J&=\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}\\&\overset{\text{IBP}}=\Big[\ln\left(1+\tan x\right)x\Big]_0^{\pi/4}-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
\int_0^{\pi/4}\ln\left(1+\tan x\right)dx&\overset{u=\frac{\pi}{4}-x}=\int_0^{\pi/4}\ln\left(1+\frac{1-\tan u}{1+\tan u}\right)du\\
&=\int_0^{\pi/4}\ln\left(\frac{2}{1+\tan u}\right)du\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan u\right)du\\
\end{align}</span>
Therefore,
<span class="math-container">\begin{align}\int_0^{\pi/4}\ln\left(1+\tan u\right)du&=\frac{1}{8}\pi\ln 2\\
J&=\boxed{\frac{1}{8}\pi\ln 2}
\end{align}</span></p>
<p>NB:
IMHO, when you have <span class="math-container">$\displaystyle \int_a^b f(x)\,x\, dx$</span>, check whether <span class="math-container">$f$</span> has a simple antiderivative.
Of course, first check whether <span class="math-container">$f(a+b-x)=f(x)$</span>.</p>
Answer 7: <p>We have <span class="math-container">$$I=\int_{0}^{\pi/4}\frac{2x}{2\cos^2x+2\cos x\sin x}dx=\int_0^{\pi/4}\frac{2x}{\cos 2x+\sin 2x+1}dx=\frac{1}{2}\int_0^{\pi/2}\frac{xdx}{\cos x+\sin x+1}=\frac{1}{2}\int_0^{\pi/2}\frac{\pi/2-x}{\cos x+\sin x+1}dx\\I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$$</span>Now substitute <span class="math-container">$x$</span> for <span class="math-container">$2x$</span> again<span class="math-container">$$=\frac{\pi}{4}\int_0^{\pi/4}\frac{dx}{2\cos^2x+2\sin x\cos x}=\frac{\pi}{8}\int_0^{\pi/4}\frac{\sec^2xdx}{1+\tan x}=\frac{\pi}{8}\int_0^{1}\frac{1}{1+u}du=\frac{\pi}{8}\log 2$$</span></p>
Answer 8: <p>Let $$I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\cos(x)+\sin(x))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\sqrt2\sin(x+\frac{\pi}{4}))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$ </p>
<p>Since $$ I(x)=I(\frac{\pi}{4}-x)$$</p>
<p>$$ I(\frac{\pi}{4}-x)=\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\cos(\frac{\pi}{4}-x)(\sqrt2\sin(\frac{\pi}{4}-x+\frac{\pi}{4}))} dx$$</p>
<p>$$ I(\frac{\pi}{4}-x) =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>But $$ I(x)=I(\frac{\pi}{4}-x)$$ </p>
<p>so $$ I+I =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})}+\int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>$$ 2I = \frac{\pi}{4\sqrt2}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>Can you continue?</p>
<p>$$ I = \frac{\sqrt2\pi}{16}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)(\frac{1}{\sqrt{2}})(\cos(x)+\sin(x))} dx$$</p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec(x)}{\sin(x)+\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{\frac{1}{\cos(x)}} dx$$ </p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec^2(x)}{\tan(x)+1} dx$$ </p>
<p>$$ I = \frac{\pi}{8} \left[\ln|\tan(x)+1 \right]^{\frac{\pi}{4}}_0$$</p>
<p>$$ I = \frac{\pi}{8} \cdot \ln(2) $$</p>
Answer 9: <p>\begin{align}
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} &= \int_0^{\pi/4}\frac{x\,dx}{\cos^2(x)+\cos(x)\sin(x)}\\ &=\int_0^{\pi/4}\frac{2x\,dx}{\cos(2x)+\sin(2x)+1} \\&=\frac{1}{2}\int_0^{\pi/2}\frac{t\,dt}{\cos(t)+\sin(t)+1}
\end{align}</p>
<p>By $t \to 1-t$
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2-t)\,dt}{\cos(t)+\sin(t)+1} $$</p>
<p>Separate the integrals
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dt}{\cos(t)+\sin(t)+1} -I$$</p>
<p>$$I = \frac{\pi}{8}\int_0^{\pi/2}\frac{\,dt}{\cos(t)+\sin(t)+1} = \frac{\pi}{8}\log(2)$$</p>
<p>The last integral could be solved using <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
Answer 10: <p>Note that $$I=\int_{0}^{\pi/4}\frac{x}{\cos\left(x\right)\left(\cos\left(x\right)+\sin\left(x\right)\right)}dx=\int_{0}^{\pi/4}\frac{x}{\cos^{2}\left(x\right)\left(1+\tan\left(x\right)\right)}dx
$$ $$\stackrel{\tan\left(x\right)=u}{=}\int_{0}^{1}\frac{\arctan\left(u\right)}{1+u}du\stackrel{IBP}{=}\frac{\pi}{4}\log\left(2\right)-\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du
$$ now note that $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du\stackrel{u=\frac{1-y}{1+y}}{=}\int_{0}^{1}\frac{\log\left(2\right)-\log\left(1+y\right)}{1+y^{2}}dy
$$ hence $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du=\frac{\log\left(2\right)}{2}\int_{0}^{1}\frac{dy}{1+y^{2}}=\frac{\pi}{8}\log\left(2\right)
$$ so $$I=\color{red}{\frac{\pi}{8}\log\left(2\right)}.$$ </p>
Answer 11: <p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\frac{\pi}{4}$ and then the change of variables $u=-w$ to get:</p>
<p>$I=\displaystyle{\int_{-\frac{\pi}{4}}^{0}\frac{u}{\cos u(\cos u -\sin u)}du+\frac{\pi}{4}\int_{-\frac{\pi}{4}}^{0}\frac{1}{\cos u(\cos u -\sin u)}du}=\\ -\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{w}{\cos w(\cos w +\sin w)}dw+\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>and therefore we have:</p>
<p>$I=-I +\displaystyle{\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw\Rightarrow I=\frac{\pi}{8}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>But with the substitution $y=tanw$ we obtain:</p>
<p>$\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw=\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos^2 w}\frac{1}{(1 +\tan w)}dw=\int_{0}^{1}\frac{1}{1+y}dy=\ln2}$</p>
<p>and finally $I=\frac{\pi}{8}\ln2$.</p>
<p><em>Note:</em> In the change of variables $x=u+\frac{\pi}{4}$ we used the trigonometric identities:</p>
<p>$\cos (x+y)=\cos x \cos y - \sin x \sin y \\ \sin (x+y)=\sin x \cos y + \cos x \sin y$</p>
Answer 12: <p><span class="math-container">\begin{align}J&=\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}\\&\overset{\text{IBP}}=\Big[\ln\left(1+\tan x\right)x\Big]_0^{\pi/4}-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
\int_0^{\pi/4}\ln\left(1+\tan x\right)dx&\overset{u=\frac{\pi}{4}-x}=\int_0^{\pi/4}\ln\left(1+\frac{1-\tan u}{1+\tan u}\right)du\\
&=\int_0^{\pi/4}\ln\left(\frac{2}{1+\tan u}\right)du\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan u\right)du\\
\end{align}</span>
Therefore,
<span class="math-container">\begin{align}\int_0^{\pi/4}\ln\left(1+\tan u\right)du&=\frac{1}{8}\pi\ln 2\\
J&=\boxed{\frac{1}{8}\pi\ln 2}
\end{align}</span></p>
<p>NB:
IMHO when you have <span class="math-container">$\displaystyle \int_a^b f(x)x dx$</span> check out if <span class="math-container">$f$</span> have a simple antiderivative.
Of course, check out first if <span class="math-container">$f(a+b-x)=f(x)$</span></p>
Answer 13: <p>We have <span class="math-container">$$I=\int_{0}^{\pi/4}\frac{2x}{2\cos^2x+2\cos x\sin x}dx=\int_0^{\pi/4}\frac{2x}{\cos 2x+\sin 2x+1}dx=\frac{1}{2}\int_0^{\pi/2}\frac{xdx}{\cos x+\sin x+1}=\frac{1}{2}\int_0^{\pi/2}\frac{\pi/2-x}{\cos x+\sin x+1}dx\\I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$$</span>Now substitute <span class="math-container">$x$</span> for <span class="math-container">$2x$</span> again<span class="math-container">$$=\frac{\pi}{4}\int_0^{\pi/4}\frac{dx}{2\cos^2x+2\sin x\cos x}=\frac{\pi}{8}\int_0^{\pi/4}\frac{\sec^2xdx}{1+\tan x}=\frac{\pi}{8}\int_0^{1}\frac{1}{1+u}du=\frac{\pi}{8}\log 2$$</span></p>
Answer 14: <p>Let $$I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\cos(x)+\sin(x))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\sqrt2\sin(x+\frac{\pi}{4}))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$ </p>
<p>Since $$ I(x)=I(\frac{\pi}{4}-x)$$</p>
<p>$$ I(\frac{\pi}{4}-x)=\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\cos(\frac{\pi}{4}-x)(\sqrt2\sin(\frac{\pi}{4}-x+\frac{\pi}{4}))} dx$$</p>
<p>$$ I(\frac{\pi}{4}-x) =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>But $$ I(x)=I(\frac{\pi}{4}-x)$$ </p>
<p>so $$ I+I =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})}+\int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>$$ 2I = \frac{\pi}{4\sqrt2}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>Can you continue?</p>
<p>$$ I = \frac{\sqrt2\pi}{16}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)(\frac{1}{\sqrt{2}})(\cos(x)+\sin(x))} dx$$</p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec(x)}{\sin(x)+\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{\frac{1}{\cos(x)}} dx$$ </p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec^2(x)}{\tan(x)+1} dx$$ </p>
<p>$$ I = \frac{\pi}{8} \left[\ln|\tan(x)+1 \right]^{\frac{\pi}{4}}_0$$</p>
<p>$$ I = \frac{\pi}{8} \cdot \ln(2) $$</p>
Answer 15: <p>\begin{align}
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} &= \int_0^{\pi/4}\frac{x\,dx}{\cos^2(x)+\cos(x)\sin(x)}\\ &=\int_0^{\pi/4}\frac{2x\,dx}{\cos(2x)+\sin(2x)+1} \\&=\frac{1}{2}\int_0^{\pi/2}\frac{t\,dt}{\cos(t)+\sin(t)+1}
\end{align}</p>
<p>By $t \to 1-t$
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2-t)\,dt}{\cos(t)+\sin(t)+1} $$</p>
<p>Separate the integrals
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dt}{\cos(t)+\sin(t)+1} -I$$</p>
<p>$$I = \frac{\pi}{8}\int_0^{\pi/2}\frac{\,dt}{\cos(t)+\sin(t)+1} = \frac{\pi}{8}\log(2)$$</p>
<p>The last integral could be solved using <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
Answer 16: <p>Note that $$I=\int_{0}^{\pi/4}\frac{x}{\cos\left(x\right)\left(\cos\left(x\right)+\sin\left(x\right)\right)}dx=\int_{0}^{\pi/4}\frac{x}{\cos^{2}\left(x\right)\left(1+\tan\left(x\right)\right)}dx
$$ $$\stackrel{\tan\left(x\right)=u}{=}\int_{0}^{1}\frac{\arctan\left(u\right)}{1+u}du\stackrel{IBP}{=}\frac{\pi}{4}\log\left(2\right)-\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du
$$ now note that $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du\stackrel{u=\frac{1-y}{1+y}}{=}\int_{0}^{1}\frac{\log\left(2\right)-\log\left(1+y\right)}{1+y^{2}}dy
$$ hence $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du=\frac{\log\left(2\right)}{2}\int_{0}^{1}\frac{dy}{1+y^{2}}=\frac{\pi}{8}\log\left(2\right)
$$ so $$I=\color{red}{\frac{\pi}{8}\log\left(2\right)}.$$ </p>
Answer 17: <p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\frac{\pi}{4}$ and then the change of variables $u=-w$ to get:</p>
<p>$I=\displaystyle{\int_{-\frac{\pi}{4}}^{0}\frac{u}{\cos u(\cos u -\sin u)}du+\frac{\pi}{4}\int_{-\frac{\pi}{4}}^{0}\frac{1}{\cos u(\cos u -\sin u)}du}=\\ -\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{w}{\cos w(\cos w +\sin w)}dw+\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>and therefore we have:</p>
<p>$I=-I +\displaystyle{\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw\Rightarrow I=\frac{\pi}{8}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>But with the substitution $y=tanw$ we obtain:</p>
<p>$\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw=\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos^2 w}\frac{1}{(1 +\tan w)}dw=\int_{0}^{1}\frac{1}{1+y}dy=\ln2}$</p>
<p>and finally $I=\frac{\pi}{8}\ln2$.</p>
<p><em>Note:</em> In the change of variables $x=u+\frac{\pi}{4}$ we used the trigonometric identities:</p>
<p>$\cos (x+y)=\cos x \cos y - \sin x \sin y \\ \sin (x+y)=\sin x \cos y + \cos x \sin y$</p>
Answer 18: <p><span class="math-container">\begin{align}J&=\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}\\&\overset{\text{IBP}}=\Big[\ln\left(1+\tan x\right)x\Big]_0^{\pi/4}-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
\int_0^{\pi/4}\ln\left(1+\tan x\right)dx&\overset{u=\frac{\pi}{4}-x}=\int_0^{\pi/4}\ln\left(1+\frac{1-\tan u}{1+\tan u}\right)du\\
&=\int_0^{\pi/4}\ln\left(\frac{2}{1+\tan u}\right)du\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan u\right)du\\
\end{align}</span>
Therefore,
<span class="math-container">\begin{align}\int_0^{\pi/4}\ln\left(1+\tan u\right)du&=\frac{1}{8}\pi\ln 2\\
J&=\boxed{\frac{1}{8}\pi\ln 2}
\end{align}</span></p>
<p>NB:
IMHO when you have <span class="math-container">$\displaystyle \int_a^b f(x)x dx$</span> check out if <span class="math-container">$f$</span> have a simple antiderivative.
Of course, check out first if <span class="math-container">$f(a+b-x)=f(x)$</span></p>
Answer 19: <p>We have <span class="math-container">$$I=\int_{0}^{\pi/4}\frac{2x}{2\cos^2x+2\cos x\sin x}dx=\int_0^{\pi/4}\frac{2x}{\cos 2x+\sin 2x+1}dx=\frac{1}{2}\int_0^{\pi/2}\frac{xdx}{\cos x+\sin x+1}=\frac{1}{2}\int_0^{\pi/2}\frac{\pi/2-x}{\cos x+\sin x+1}dx\\I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$$</span>Now substitute <span class="math-container">$x$</span> for <span class="math-container">$2x$</span> again<span class="math-container">$$=\frac{\pi}{4}\int_0^{\pi/4}\frac{dx}{2\cos^2x+2\sin x\cos x}=\frac{\pi}{8}\int_0^{\pi/4}\frac{\sec^2xdx}{1+\tan x}=\frac{\pi}{8}\int_0^{1}\frac{1}{1+u}du=\frac{\pi}{8}\log 2$$</span></p>
Answer 20: <p>Let $$I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\cos(x)+\sin(x))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\sqrt2\sin(x+\frac{\pi}{4}))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$ </p>
<p>Since $$ I(x)=I(\frac{\pi}{4}-x)$$</p>
<p>$$ I(\frac{\pi}{4}-x)=\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\cos(\frac{\pi}{4}-x)(\sqrt2\sin(\frac{\pi}{4}-x+\frac{\pi}{4}))} dx$$</p>
<p>$$ I(\frac{\pi}{4}-x) =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>But $$ I(x)=I(\frac{\pi}{4}-x)$$ </p>
<p>so $$ I+I =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})}+\int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>$$ 2I = \frac{\pi}{4\sqrt2}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>Can you continue?</p>
<p>$$ I = \frac{\sqrt2\pi}{16}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)(\frac{1}{\sqrt{2}})(\cos(x)+\sin(x))} dx$$</p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec(x)}{\sin(x)+\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{\frac{1}{\cos(x)}} dx$$ </p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec^2(x)}{\tan(x)+1} dx$$ </p>
<p>$$ I = \frac{\pi}{8} \left[\ln|\tan(x)+1 \right]^{\frac{\pi}{4}}_0$$</p>
<p>$$ I = \frac{\pi}{8} \cdot \ln(2) $$</p>
Answer 21: <p>\begin{align}
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} &= \int_0^{\pi/4}\frac{x\,dx}{\cos^2(x)+\cos(x)\sin(x)}\\ &=\int_0^{\pi/4}\frac{2x\,dx}{\cos(2x)+\sin(2x)+1} \\&=\frac{1}{2}\int_0^{\pi/2}\frac{t\,dt}{\cos(t)+\sin(t)+1}
\end{align}</p>
<p>By $t \to 1-t$
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2-t)\,dt}{\cos(t)+\sin(t)+1} $$</p>
<p>Separate the integrals
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dt}{\cos(t)+\sin(t)+1} -I$$</p>
<p>$$I = \frac{\pi}{8}\int_0^{\pi/2}\frac{\,dt}{\cos(t)+\sin(t)+1} = \frac{\pi}{8}\log(2)$$</p>
<p>The last integral could be solved using <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
Answer 22: <p>Note that $$I=\int_{0}^{\pi/4}\frac{x}{\cos\left(x\right)\left(\cos\left(x\right)+\sin\left(x\right)\right)}dx=\int_{0}^{\pi/4}\frac{x}{\cos^{2}\left(x\right)\left(1+\tan\left(x\right)\right)}dx
$$ $$\stackrel{\tan\left(x\right)=u}{=}\int_{0}^{1}\frac{\arctan\left(u\right)}{1+u}du\stackrel{IBP}{=}\frac{\pi}{4}\log\left(2\right)-\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du
$$ now note that $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du\stackrel{u=\frac{1-y}{1+y}}{=}\int_{0}^{1}\frac{\log\left(2\right)-\log\left(1+y\right)}{1+y^{2}}dy
$$ hence $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du=\frac{\log\left(2\right)}{2}\int_{0}^{1}\frac{dy}{1+y^{2}}=\frac{\pi}{8}\log\left(2\right)
$$ so $$I=\color{red}{\frac{\pi}{8}\log\left(2\right)}.$$ </p>
Answer 23: <p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\frac{\pi}{4}$ and then the change of variables $u=-w$ to get:</p>
<p>$I=\displaystyle{\int_{-\frac{\pi}{4}}^{0}\frac{u}{\cos u(\cos u -\sin u)}du+\frac{\pi}{4}\int_{-\frac{\pi}{4}}^{0}\frac{1}{\cos u(\cos u -\sin u)}du}=\\ -\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{w}{\cos w(\cos w +\sin w)}dw+\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>and therefore we have:</p>
<p>$I=-I +\displaystyle{\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw\Rightarrow I=\frac{\pi}{8}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>But with the substitution $y=tanw$ we obtain:</p>
<p>$\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw=\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos^2 w}\frac{1}{(1 +\tan w)}dw=\int_{0}^{1}\frac{1}{1+y}dy=\ln2}$</p>
<p>and finally $I=\frac{\pi}{8}\ln2$.</p>
<p><em>Note:</em> In the change of variables $x=u+\frac{\pi}{4}$ we used the trigonometric identities:</p>
<p>$\cos (x+y)=\cos x \cos y - \sin x \sin y \\ \sin (x+y)=\sin x \cos y + \cos x \sin y$</p>
Answer 24: <p><span class="math-container">\begin{align}J&=\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}\\&\overset{\text{IBP}}=\Big[\ln\left(1+\tan x\right)x\Big]_0^{\pi/4}-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
\int_0^{\pi/4}\ln\left(1+\tan x\right)dx&\overset{u=\frac{\pi}{4}-x}=\int_0^{\pi/4}\ln\left(1+\frac{1-\tan u}{1+\tan u}\right)du\\
&=\int_0^{\pi/4}\ln\left(\frac{2}{1+\tan u}\right)du\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan u\right)du\\
\end{align}</span>
Therefore,
<span class="math-container">\begin{align}\int_0^{\pi/4}\ln\left(1+\tan u\right)du&=\frac{1}{8}\pi\ln 2\\
J&=\boxed{\frac{1}{8}\pi\ln 2}
\end{align}</span></p>
<p>NB:
IMHO when you have <span class="math-container">$\displaystyle \int_a^b f(x)x dx$</span> check out if <span class="math-container">$f$</span> have a simple antiderivative.
Of course, check out first if <span class="math-container">$f(a+b-x)=f(x)$</span></p>
Answer 25: <p>We have <span class="math-container">$$I=\int_{0}^{\pi/4}\frac{2x}{2\cos^2x+2\cos x\sin x}dx=\int_0^{\pi/4}\frac{2x}{\cos 2x+\sin 2x+1}dx=\frac{1}{2}\int_0^{\pi/2}\frac{xdx}{\cos x+\sin x+1}=\frac{1}{2}\int_0^{\pi/2}\frac{\pi/2-x}{\cos x+\sin x+1}dx\\I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$$</span>Now substitute <span class="math-container">$x$</span> for <span class="math-container">$2x$</span> again<span class="math-container">$$=\frac{\pi}{4}\int_0^{\pi/4}\frac{dx}{2\cos^2x+2\sin x\cos x}=\frac{\pi}{8}\int_0^{\pi/4}\frac{\sec^2xdx}{1+\tan x}=\frac{\pi}{8}\int_0^{1}\frac{1}{1+u}du=\frac{\pi}{8}\log 2$$</span></p>
Answer 26: <p>Let $$I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\cos(x)+\sin(x))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\sqrt2\sin(x+\frac{\pi}{4}))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$ </p>
<p>Since $$ I(x)=I(\frac{\pi}{4}-x)$$</p>
<p>$$ I(\frac{\pi}{4}-x)=\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\cos(\frac{\pi}{4}-x)(\sqrt2\sin(\frac{\pi}{4}-x+\frac{\pi}{4}))} dx$$</p>
<p>$$ I(\frac{\pi}{4}-x) =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>But $$ I(x)=I(\frac{\pi}{4}-x)$$ </p>
<p>so $$ I+I =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})}+\int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>$$ 2I = \frac{\pi}{4\sqrt2}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>Can you continue?</p>
<p>$$ I = \frac{\sqrt2\pi}{16}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)(\frac{1}{\sqrt{2}})(\cos(x)+\sin(x))} dx$$</p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec(x)}{\sin(x)+\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{\frac{1}{\cos(x)}} dx$$ </p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec^2(x)}{\tan(x)+1} dx$$ </p>
<p>$$ I = \frac{\pi}{8} \left[\ln|\tan(x)+1 \right]^{\frac{\pi}{4}}_0$$</p>
<p>$$ I = \frac{\pi}{8} \cdot \ln(2) $$</p>
Answer 27: <p>\begin{align}
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} &= \int_0^{\pi/4}\frac{x\,dx}{\cos^2(x)+\cos(x)\sin(x)}\\ &=\int_0^{\pi/4}\frac{2x\,dx}{\cos(2x)+\sin(2x)+1} \\&=\frac{1}{2}\int_0^{\pi/2}\frac{t\,dt}{\cos(t)+\sin(t)+1}
\end{align}</p>
<p>By $t \to 1-t$
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2-t)\,dt}{\cos(t)+\sin(t)+1} $$</p>
<p>Separate the integrals
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dt}{\cos(t)+\sin(t)+1} -I$$</p>
<p>$$I = \frac{\pi}{8}\int_0^{\pi/2}\frac{\,dt}{\cos(t)+\sin(t)+1} = \frac{\pi}{8}\log(2)$$</p>
<p>The last integral could be solved using <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
Answer 28: <p>Note that $$I=\int_{0}^{\pi/4}\frac{x}{\cos\left(x\right)\left(\cos\left(x\right)+\sin\left(x\right)\right)}dx=\int_{0}^{\pi/4}\frac{x}{\cos^{2}\left(x\right)\left(1+\tan\left(x\right)\right)}dx
$$ $$\stackrel{\tan\left(x\right)=u}{=}\int_{0}^{1}\frac{\arctan\left(u\right)}{1+u}du\stackrel{IBP}{=}\frac{\pi}{4}\log\left(2\right)-\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du
$$ now note that $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du\stackrel{u=\frac{1-y}{1+y}}{=}\int_{0}^{1}\frac{\log\left(2\right)-\log\left(1+y\right)}{1+y^{2}}dy
$$ hence $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du=\frac{\log\left(2\right)}{2}\int_{0}^{1}\frac{dy}{1+y^{2}}=\frac{\pi}{8}\log\left(2\right)
$$ so $$I=\color{red}{\frac{\pi}{8}\log\left(2\right)}.$$ </p>
Answer 29: <p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\frac{\pi}{4}$ and then the change of variables $u=-w$ to get:</p>
<p>$I=\displaystyle{\int_{-\frac{\pi}{4}}^{0}\frac{u}{\cos u(\cos u -\sin u)}du+\frac{\pi}{4}\int_{-\frac{\pi}{4}}^{0}\frac{1}{\cos u(\cos u -\sin u)}du}=\\ -\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{w}{\cos w(\cos w +\sin w)}dw+\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>and therefore we have:</p>
<p>$I=-I +\displaystyle{\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw\Rightarrow I=\frac{\pi}{8}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>But with the substitution $y=tanw$ we obtain:</p>
<p>$\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw=\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos^2 w}\frac{1}{(1 +\tan w)}dw=\int_{0}^{1}\frac{1}{1+y}dy=\ln2}$</p>
<p>and finally $I=\frac{\pi}{8}\ln2$.</p>
<p><em>Note:</em> In the change of variables $x=u+\frac{\pi}{4}$ we used the trigonometric identities:</p>
<p>$\cos (x+y)=\cos x \cos y - \sin x \sin y \\ \sin (x+y)=\sin x \cos y + \cos x \sin y$</p>
Answer 30: <p><span class="math-container">\begin{align}J&=\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}\\&\overset{\text{IBP}}=\Big[\ln\left(1+\tan x\right)x\Big]_0^{\pi/4}-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
\int_0^{\pi/4}\ln\left(1+\tan x\right)dx&\overset{u=\frac{\pi}{4}-x}=\int_0^{\pi/4}\ln\left(1+\frac{1-\tan u}{1+\tan u}\right)du\\
&=\int_0^{\pi/4}\ln\left(\frac{2}{1+\tan u}\right)du\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan u\right)du\\
\end{align}</span>
Therefore,
<span class="math-container">\begin{align}\int_0^{\pi/4}\ln\left(1+\tan u\right)du&=\frac{1}{8}\pi\ln 2\\
J&=\boxed{\frac{1}{8}\pi\ln 2}
\end{align}</span></p>
<p>NB:
IMHO when you have <span class="math-container">$\displaystyle \int_a^b f(x)x dx$</span> check out if <span class="math-container">$f$</span> have a simple antiderivative.
Of course, check out first if <span class="math-container">$f(a+b-x)=f(x)$</span></p>
Answer 31: <p>We have <span class="math-container">$$I=\int_{0}^{\pi/4}\frac{2x}{2\cos^2x+2\cos x\sin x}dx=\int_0^{\pi/4}\frac{2x}{\cos 2x+\sin 2x+1}dx=\frac{1}{2}\int_0^{\pi/2}\frac{xdx}{\cos x+\sin x+1}=\frac{1}{2}\int_0^{\pi/2}\frac{\pi/2-x}{\cos x+\sin x+1}dx\\I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$$</span>Now substitute <span class="math-container">$x$</span> for <span class="math-container">$2x$</span> again<span class="math-container">$$=\frac{\pi}{4}\int_0^{\pi/4}\frac{dx}{2\cos^2x+2\sin x\cos x}=\frac{\pi}{8}\int_0^{\pi/4}\frac{\sec^2xdx}{1+\tan x}=\frac{\pi}{8}\int_0^{1}\frac{1}{1+u}du=\frac{\pi}{8}\log 2$$</span></p>
Answer 32: <p>Let $$I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\cos(x)+\sin(x))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\sqrt2\sin(x+\frac{\pi}{4}))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$ </p>
<p>Since $$ I(x)=I(\frac{\pi}{4}-x)$$</p>
<p>$$ I(\frac{\pi}{4}-x)=\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\cos(\frac{\pi}{4}-x)(\sqrt2\sin(\frac{\pi}{4}-x+\frac{\pi}{4}))} dx$$</p>
<p>$$ I(\frac{\pi}{4}-x) =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>But $$ I(x)=I(\frac{\pi}{4}-x)$$ </p>
<p>so $$ I+I =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})}+\int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>$$ 2I = \frac{\pi}{4\sqrt2}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>Can you continue?</p>
<p>$$ I = \frac{\sqrt2\pi}{16}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)(\frac{1}{\sqrt{2}})(\cos(x)+\sin(x))} dx$$</p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec(x)}{\sin(x)+\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{\frac{1}{\cos(x)}} dx$$ </p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec^2(x)}{\tan(x)+1} dx$$ </p>
<p>$$ I = \frac{\pi}{8} \left[\ln|\tan(x)+1 \right]^{\frac{\pi}{4}}_0$$</p>
<p>$$ I = \frac{\pi}{8} \cdot \ln(2) $$</p>
Answer 33: <p>\begin{align}
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} &= \int_0^{\pi/4}\frac{x\,dx}{\cos^2(x)+\cos(x)\sin(x)}\\ &=\int_0^{\pi/4}\frac{2x\,dx}{\cos(2x)+\sin(2x)+1} \\&=\frac{1}{2}\int_0^{\pi/2}\frac{t\,dt}{\cos(t)+\sin(t)+1}
\end{align}</p>
<p>By $t \to 1-t$
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2-t)\,dt}{\cos(t)+\sin(t)+1} $$</p>
<p>Separate the integrals
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dt}{\cos(t)+\sin(t)+1} -I$$</p>
<p>$$I = \frac{\pi}{8}\int_0^{\pi/2}\frac{\,dt}{\cos(t)+\sin(t)+1} = \frac{\pi}{8}\log(2)$$</p>
<p>The last integral could be solved using <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
Answer 34: <p>Note that $$I=\int_{0}^{\pi/4}\frac{x}{\cos\left(x\right)\left(\cos\left(x\right)+\sin\left(x\right)\right)}dx=\int_{0}^{\pi/4}\frac{x}{\cos^{2}\left(x\right)\left(1+\tan\left(x\right)\right)}dx
$$ $$\stackrel{\tan\left(x\right)=u}{=}\int_{0}^{1}\frac{\arctan\left(u\right)}{1+u}du\stackrel{IBP}{=}\frac{\pi}{4}\log\left(2\right)-\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du
$$ now note that $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du\stackrel{u=\frac{1-y}{1+y}}{=}\int_{0}^{1}\frac{\log\left(2\right)-\log\left(1+y\right)}{1+y^{2}}dy
$$ hence $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du=\frac{\log\left(2\right)}{2}\int_{0}^{1}\frac{dy}{1+y^{2}}=\frac{\pi}{8}\log\left(2\right)
$$ so $$I=\color{red}{\frac{\pi}{8}\log\left(2\right)}.$$ </p>
Answer 35: <p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\frac{\pi}{4}$ and then the change of variables $u=-w$ to get:</p>
<p>$I=\displaystyle{\int_{-\frac{\pi}{4}}^{0}\frac{u}{\cos u(\cos u -\sin u)}du+\frac{\pi}{4}\int_{-\frac{\pi}{4}}^{0}\frac{1}{\cos u(\cos u -\sin u)}du}=\\ -\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{w}{\cos w(\cos w +\sin w)}dw+\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>and therefore we have:</p>
<p>$I=-I +\displaystyle{\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw\Rightarrow I=\frac{\pi}{8}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>But with the substitution $y=tanw$ we obtain:</p>
<p>$\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw=\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos^2 w}\frac{1}{(1 +\tan w)}dw=\int_{0}^{1}\frac{1}{1+y}dy=\ln2}$</p>
<p>and finally $I=\frac{\pi}{8}\ln2$.</p>
<p><em>Note:</em> In the change of variables $x=u+\frac{\pi}{4}$ we used the trigonometric identities:</p>
<p>$\cos (x+y)=\cos x \cos y - \sin x \sin y \\ \sin (x+y)=\sin x \cos y + \cos x \sin y$</p>
Answer 36: <p><span class="math-container">\begin{align}J&=\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}\\&\overset{\text{IBP}}=\Big[\ln\left(1+\tan x\right)x\Big]_0^{\pi/4}-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
\int_0^{\pi/4}\ln\left(1+\tan x\right)dx&\overset{u=\frac{\pi}{4}-x}=\int_0^{\pi/4}\ln\left(1+\frac{1-\tan u}{1+\tan u}\right)du\\
&=\int_0^{\pi/4}\ln\left(\frac{2}{1+\tan u}\right)du\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan u\right)du\\
\end{align}</span>
Therefore,
<span class="math-container">\begin{align}\int_0^{\pi/4}\ln\left(1+\tan u\right)du&=\frac{1}{8}\pi\ln 2\\
J&=\boxed{\frac{1}{8}\pi\ln 2}
\end{align}</span></p>
<p>NB:
IMHO when you have <span class="math-container">$\displaystyle \int_a^b f(x)x dx$</span> check out if <span class="math-container">$f$</span> have a simple antiderivative.
Of course, check out first if <span class="math-container">$f(a+b-x)=f(x)$</span></p>
Answer 37: <p>We have <span class="math-container">$$I=\int_{0}^{\pi/4}\frac{2x}{2\cos^2x+2\cos x\sin x}dx=\int_0^{\pi/4}\frac{2x}{\cos 2x+\sin 2x+1}dx=\frac{1}{2}\int_0^{\pi/2}\frac{xdx}{\cos x+\sin x+1}=\frac{1}{2}\int_0^{\pi/2}\frac{\pi/2-x}{\cos x+\sin x+1}dx\\I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$$</span>Now substitute <span class="math-container">$x$</span> for <span class="math-container">$2x$</span> again<span class="math-container">$$=\frac{\pi}{4}\int_0^{\pi/4}\frac{dx}{2\cos^2x+2\sin x\cos x}=\frac{\pi}{8}\int_0^{\pi/4}\frac{\sec^2xdx}{1+\tan x}=\frac{\pi}{8}\int_0^{1}\frac{1}{1+u}du=\frac{\pi}{8}\log 2$$</span></p>
Answer 38: <p>Let $$I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\cos(x)+\sin(x))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\sqrt2\sin(x+\frac{\pi}{4}))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$ </p>
<p>Since $$ I(x)=I(\frac{\pi}{4}-x)$$</p>
<p>$$ I(\frac{\pi}{4}-x)=\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\cos(\frac{\pi}{4}-x)(\sqrt2\sin(\frac{\pi}{4}-x+\frac{\pi}{4}))} dx$$</p>
<p>$$ I(\frac{\pi}{4}-x) =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>But $$ I(x)=I(\frac{\pi}{4}-x)$$ </p>
<p>so $$ I+I =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})}+\int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>$$ 2I = \frac{\pi}{4\sqrt2}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>Can you continue?</p>
<p>$$ I = \frac{\sqrt2\pi}{16}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)(\frac{1}{\sqrt{2}})(\cos(x)+\sin(x))} dx$$</p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec(x)}{\sin(x)+\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{\frac{1}{\cos(x)}} dx$$ </p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec^2(x)}{\tan(x)+1} dx$$ </p>
<p>$$ I = \frac{\pi}{8} \left[\ln|\tan(x)+1 \right]^{\frac{\pi}{4}}_0$$</p>
<p>$$ I = \frac{\pi}{8} \cdot \ln(2) $$</p>
Answer 39: <p>\begin{align}
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} &= \int_0^{\pi/4}\frac{x\,dx}{\cos^2(x)+\cos(x)\sin(x)}\\ &=\int_0^{\pi/4}\frac{2x\,dx}{\cos(2x)+\sin(2x)+1} \\&=\frac{1}{2}\int_0^{\pi/2}\frac{t\,dt}{\cos(t)+\sin(t)+1}
\end{align}</p>
<p>By $t \to 1-t$
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2-t)\,dt}{\cos(t)+\sin(t)+1} $$</p>
<p>Separate the integrals
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dt}{\cos(t)+\sin(t)+1} -I$$</p>
<p>$$I = \frac{\pi}{8}\int_0^{\pi/2}\frac{\,dt}{\cos(t)+\sin(t)+1} = \frac{\pi}{8}\log(2)$$</p>
<p>The last integral could be solved using <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
Answer 40: <p>Note that $$I=\int_{0}^{\pi/4}\frac{x}{\cos\left(x\right)\left(\cos\left(x\right)+\sin\left(x\right)\right)}dx=\int_{0}^{\pi/4}\frac{x}{\cos^{2}\left(x\right)\left(1+\tan\left(x\right)\right)}dx
$$ $$\stackrel{\tan\left(x\right)=u}{=}\int_{0}^{1}\frac{\arctan\left(u\right)}{1+u}du\stackrel{IBP}{=}\frac{\pi}{4}\log\left(2\right)-\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du
$$ now note that $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du\stackrel{u=\frac{1-y}{1+y}}{=}\int_{0}^{1}\frac{\log\left(2\right)-\log\left(1+y\right)}{1+y^{2}}dy
$$ hence $$\int_{0}^{1}\frac{\log\left(1+u\right)}{1+u^{2}}du=\frac{\log\left(2\right)}{2}\int_{0}^{1}\frac{dy}{1+y^{2}}=\frac{\pi}{8}\log\left(2\right)
$$ so $$I=\color{red}{\frac{\pi}{8}\log\left(2\right)}.$$ </p>
Answer 41: <p>Let $I$ the integral that we want to compute. First we perform the change of variables $x=u+\frac{\pi}{4}$ and then the change of variables $u=-w$ to get:</p>
<p>$I=\displaystyle{\int_{-\frac{\pi}{4}}^{0}\frac{u}{\cos u(\cos u -\sin u)}du+\frac{\pi}{4}\int_{-\frac{\pi}{4}}^{0}\frac{1}{\cos u(\cos u -\sin u)}du}=\\ -\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{w}{\cos w(\cos w +\sin w)}dw+\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>and therefore we have:</p>
<p>$I=-I +\displaystyle{\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw\Rightarrow I=\frac{\pi}{8}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$</p>
<p>But with the substitution $y=tanw$ we obtain:</p>
<p>$\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw=\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos^2 w}\frac{1}{(1 +\tan w)}dw=\int_{0}^{1}\frac{1}{1+y}dy=\ln2}$</p>
<p>and finally $I=\frac{\pi}{8}\ln2$.</p>
<p><em>Note:</em> In the change of variables $x=u+\frac{\pi}{4}$ we used the trigonometric identities:</p>
<p>$\cos (x+y)=\cos x \cos y - \sin x \sin y \\ \sin (x+y)=\sin x \cos y + \cos x \sin y$</p>
Answer 42: <p><span class="math-container">\begin{align}J&=\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}\\&\overset{\text{IBP}}=\Big[\ln\left(1+\tan x\right)x\Big]_0^{\pi/4}-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan x\right)dx\\
\int_0^{\pi/4}\ln\left(1+\tan x\right)dx&\overset{u=\frac{\pi}{4}-x}=\int_0^{\pi/4}\ln\left(1+\frac{1-\tan u}{1+\tan u}\right)du\\
&=\int_0^{\pi/4}\ln\left(\frac{2}{1+\tan u}\right)du\\
&=\frac{1}{4}\pi\ln 2-\int_0^{\pi/4}\ln\left(1+\tan u\right)du\\
\end{align}</span>
Therefore,
<span class="math-container">\begin{align}\int_0^{\pi/4}\ln\left(1+\tan u\right)du&=\frac{1}{8}\pi\ln 2\\
J&=\boxed{\frac{1}{8}\pi\ln 2}
\end{align}</span></p>
<p>NB:
IMHO when you have <span class="math-container">$\displaystyle \int_a^b f(x)x dx$</span> check out if <span class="math-container">$f$</span> have a simple antiderivative.
Of course, check out first if <span class="math-container">$f(a+b-x)=f(x)$</span></p>
Answer 43: <p>We have <span class="math-container">$$I=\int_{0}^{\pi/4}\frac{2x}{2\cos^2x+2\cos x\sin x}dx=\int_0^{\pi/4}\frac{2x}{\cos 2x+\sin 2x+1}dx=\frac{1}{2}\int_0^{\pi/2}\frac{xdx}{\cos x+\sin x+1}=\frac{1}{2}\int_0^{\pi/2}\frac{\pi/2-x}{\cos x+\sin x+1}dx\\I=\frac{1}{4}\int_0^{\pi/2}\frac{\pi/2}{\cos x+\sin x+1}dx$$</span>Now substitute <span class="math-container">$x$</span> for <span class="math-container">$2x$</span> again<span class="math-container">$$=\frac{\pi}{4}\int_0^{\pi/4}\frac{dx}{2\cos^2x+2\sin x\cos x}=\frac{\pi}{8}\int_0^{\pi/4}\frac{\sec^2xdx}{1+\tan x}=\frac{\pi}{8}\int_0^{1}\frac{1}{1+u}du=\frac{\pi}{8}\log 2$$</span></p>
Answer 44: <p>Let $$I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\cos(x)+\sin(x))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\cos(x)(\sqrt2\sin(x+\frac{\pi}{4}))} dx$$ </p>
<p>$$ I(x)= \int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$ </p>
<p>Since $$ I(x)=I(\frac{\pi}{4}-x)$$</p>
<p>$$ I(\frac{\pi}{4}-x)=\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\cos(\frac{\pi}{4}-x)(\sqrt2\sin(\frac{\pi}{4}-x+\frac{\pi}{4}))} dx$$</p>
<p>$$ I(\frac{\pi}{4}-x) =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>But $$ I(x)=I(\frac{\pi}{4}-x)$$ </p>
<p>so $$ I+I =\int^{\frac{\pi}{4}}_0 \frac{\frac{\pi}{4}-x}{\sqrt2 \cos(x)\sin(x+\frac{\pi}{4})}+\int^{\frac{\pi}{4}}_0 \frac{x}{\sqrt2\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>$$ 2I = \frac{\pi}{4\sqrt2}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)\sin(x+\frac{\pi}{4})} dx$$</p>
<p>Can you continue?</p>
<p>$$ I = \frac{\sqrt2\pi}{16}\int^{\frac{\pi}{4}}_0 \frac{1}{\cos(x)(\frac{1}{\sqrt{2}})(\cos(x)+\sin(x))} dx$$</p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec(x)}{\sin(x)+\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{\frac{1}{\cos(x)}} dx$$ </p>
<p>$$ I = \frac{\pi}{8}\int^{\frac{\pi}{4}}_0 \frac{\sec^2(x)}{\tan(x)+1} dx$$ </p>
<p>$$ I = \frac{\pi}{8} \left[\ln\left|\tan(x)+1\right| \right]^{\frac{\pi}{4}}_0$$</p>
<p>$$ I = \frac{\pi}{8} \cdot \ln(2) $$</p>
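<p>All of the derivations above can be sanity-checked numerically. Here is a minimal sketch using <code>scipy.integrate.quad</code> (an assumed tooling choice for illustration; any quadrature routine would do):</p>
<pre><code class="lang-python">import numpy as np
from scipy.integrate import quad

# Integrand of I = \int_0^{pi/4} x / (cos x (cos x + sin x)) dx
def integrand(x):
    return x / (np.cos(x) * (np.cos(x) + np.sin(x)))

value, abs_err = quad(integrand, 0.0, np.pi / 4)
closed_form = np.pi / 8 * np.log(2)

print(value)        # both are approximately 0.2722
print(closed_form)
assert abs(value - closed_form) < 1e-8
</code></pre>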
Answer 45: <p>\begin{align}
\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} &= \int_0^{\pi/4}\frac{x\,dx}{\cos^2(x)+\cos(x)\sin(x)}\\ &=\int_0^{\pi/4}\frac{2x\,dx}{\cos(2x)+\sin(2x)+1} \\&=\frac{1}{2}\int_0^{\pi/2}\frac{t\,dt}{\cos(t)+\sin(t)+1}
\end{align}</p>
<p>By $t \to \pi/2-t$,
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2-t)\,dt}{\cos(t)+\sin(t)+1} $$</p>
<p>Separate the integrals
$$I =\frac{1}{2}\int_0^{\pi/2}\frac{(\pi/2)\,dt}{\cos(t)+\sin(t)+1} -I$$</p>
<p>$$I = \frac{\pi}{8}\int_0^{\pi/2}\frac{\,dt}{\cos(t)+\sin(t)+1} = \frac{\pi}{8}\log(2)$$</p>
<p>The last integral could be solved using <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
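<p>For completeness, here is a sketch of that computation: with $u=\tan\frac{t}{2}$ one has $\cos t=\frac{1-u^{2}}{1+u^{2}}$, $\sin t=\frac{2u}{1+u^{2}}$ and $dt=\frac{2\,du}{1+u^{2}}$, so
$$\int_0^{\pi/2}\frac{dt}{\cos t+\sin t+1}=\int_0^{1}\frac{\frac{2\,du}{1+u^{2}}}{\frac{\left(1-u^{2}\right)+2u+\left(1+u^{2}\right)}{1+u^{2}}}=\int_0^{1}\frac{2\,du}{2+2u}=\int_0^{1}\frac{du}{1+u}=\log 2.$$</p>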
|
Sampling Distribution of sample mean for Poisson Distribution | 2,085,072 | 5 | https://math.stackexchange.com/questions/2085072/sampling-distribution-of-sample-mean-for-poisson-distribution | <p><a href="https://i.stack.imgur.com/GdnAn.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/GdnAn.jpg" alt="enter image description here"></a></p>
<p>I am particularly struggling with part b; I don't know where to begin. For part a, I think the answer is that the sampling distribution is Poisson($n\lambda$).</p>
| [
{
"answer_id": 2622872,
"score": 4,
"text": "<p>The sampling distribution of a Poisson(λ) distributed random variable is given by:</p>\n\n<p>$$\nP(X_i = x_i) = f(x_i) = \\frac{e^{-\\lambda}\\lambda^{x_i}} {x_i!}\n$$\nwhere $x_i \\in \\{0, 1, 2 ...\\}$</p>\n\n<p>As you have correctly suggested the sum is Poisson(nλ) and therefore, substituting nλ for λ and $\\Sigma x_i$ for $x_i$ the sampling distribution for $\\Sigma X_i$ is given by:</p>\n\n<p>$$\nP(\\Sigma X_i = \\Sigma x_i) = g(\\Sigma x_i) = \\frac{e^{-n\\lambda}(n\\lambda)^{\\Sigma x_i}} {(\\Sigma x_i)!}\n$$\nwhere $\\Sigma x_i \\in \\{0, 1, 2 ...\\}$</p>\n\n<p>the sampling distribution for the sample mean, $\\bar{X}$, is derived using the relation $\\Sigma X_i =n\\bar{X} $ as follows:</p>\n\n<p>$$\nh(\\bar{x}) = P(\\bar{X} = \\bar{x}) = P(\\Sigma X_i = n\\bar{x}) = g(n\\bar{x})\n$$</p>\n\n<p>$$\n= \\frac{e^{-n\\lambda}(n\\lambda)^{n\\bar{x}}} {(n\\bar{x})!}\n$$\nwhere $\\bar{x} \\in \\{0, \\frac{1}{n}, \\frac{2}{n} ...\\}$</p>\n\n<p>For $ n \\ne 1$ This is not a poisson distribution as the pdf is not of the form $ \\frac{e^{-\\lambda}\\lambda^{\\bar{x}}} {\\bar{x}!}$.</p>\n\n<p>The sum distribution is stretched by factor 1/n as can be seen in these plots </p>\n\n<p>(Note that the probabilities are only defined for the plotted circles, trendlines are just for visualising).</p>\n\n<p><a href=\"https://i.stack.imgur.com/AvP2k.png\" rel=\"nofollow noreferrer\">N = 2, λ = 3</a></p>\n\n<p><a href=\"https://i.stack.imgur.com/VKCpo.png\" rel=\"nofollow noreferrer\">N = 10, λ = 2</a></p>\n"
},
{
"answer_id": 2085244,
"score": 1,
"text": "<p>Your answer is related to this subject</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions</a></p>\n\n<p>Since each variable in the sample is Poisson($\\lambda$) distributed(and they are essentially independent), the sum would be, as you said, Poisson($n \\lambda$).</p>\n\n<p>The proof is simply deduced from the fact that the characteristic function of sum of independent random variables is the multiplication of their characteristic function. And using the inverse formula gives its density function(in case of discrete random variables their mass probability function).</p>\n\n<p>Well the section(b) is not much more different. </p>\n"
},
{
"answer_id": 3528097,
"score": 1,
"text": "<p>I think the standard way of solving this is using induction. I.e. computing each probability and inducing the pattern. e.g. </p>\n\n<p><span class=\"math-container\">$P(\\sum{x_i = 0}) = \\binom{n}{0}(e^{-\\lambda})^n = e^{-n\\lambda} ; \\;$</span> every x has to be 0. </p>\n\n<p><span class=\"math-container\">$P(\\sum{x_i = 1}) = \\binom{n}{1}(e^{-\\lambda})^{n-1}(\\frac{\\lambda}{1!}e^{-\\lambda}) = n\\lambda e^{-n\\lambda} ; \\;$</span> exactly one x has to be 1.</p>\n\n<p><span class=\"math-container\">$P(\\sum{x_i = 2}) = \\binom{n}{2}(e^{-\\lambda})^{n-2}(\\frac{\\lambda}{1!}e^{-\\lambda})^2 + \\binom{n}{1}(e^{-\\lambda})^{n-1}(\\frac{\\lambda^2}{2!}e^{-\\lambda}) = \\frac{(n\\lambda)^2}{2!} e^{-n\\lambda} ; \\;$</span> either 2 x's are 1, or 1 x is 2. </p>\n\n<p>.</p>\n\n<p>.</p>\n\n<p>And so by induction <span class=\"math-container\">$\\sum{x_i} \\sim Pois(n\\lambda)$</span></p>\n\n<hr>\n\n<p>An \"easier\" way is using the moment generating function (<strong>MGF</strong>) (or alternatively the characteristic function). For this you need a small conjecture: </p>\n\n<p><span class=\"math-container\">$M_{\\sum{x_i}}(t) = \\mathbf{E}(e^{\\sum{x_i} t}) = \\mathbf{E}(\\prod e^{x_i t}) = \\prod\\mathbf{E}(e^{x_i t}) = \\prod\\mathbf{E}(e^{x t}) = \\prod M_{x}(t) = M_{x}(t)^n$</span></p>\n\n<p>The MGF of Poisson is <span class=\"math-container\">$M_x(t) = e^{\\lambda(e^t-1)}$</span>, so the MGF of the sum is <span class=\"math-container\">$M_{\\sum x_i}(t) = e^{n\\lambda(e^t-1)}$</span>, so by this we can say that <span class=\"math-container\">$\\sum{x_i} \\sim Pois(n\\lambda)$</span>.</p>\n\n<p>Since <span class=\"math-container\">$\\sum{x_i} = n \\bar{x}$</span>, then <span class=\"math-container\">$\\bar x \\sim \\frac{1}{n}Pois(n\\lambda)$</span></p>\n"
}
] | Question: Sampling Distribution of sample mean for Poisson Distribution
Answer 1: <p>The probability mass function of a Poisson(λ) distributed random variable is given by:</p>
<p>$$
P(X_i = x_i) = f(x_i) = \frac{e^{-\lambda}\lambda^{x_i}} {x_i!}
$$
where $x_i \in \{0, 1, 2 ...\}$</p>
<p>As you have correctly suggested, the sum is Poisson(nλ); therefore, substituting nλ for λ and $\Sigma x_i$ for $x_i$, the sampling distribution for $\Sigma X_i$ is given by:</p>
<p>$$
P(\Sigma X_i = \Sigma x_i) = g(\Sigma x_i) = \frac{e^{-n\lambda}(n\lambda)^{\Sigma x_i}} {(\Sigma x_i)!}
$$
where $\Sigma x_i \in \{0, 1, 2 ...\}$</p>
<p>The sampling distribution of the sample mean, $\bar{X}$, is derived using the relation $\Sigma X_i = n\bar{X}$ as follows:</p>
<p>$$
h(\bar{x}) = P(\bar{X} = \bar{x}) = P(\Sigma X_i = n\bar{x}) = g(n\bar{x})
$$</p>
<p>$$
= \frac{e^{-n\lambda}(n\lambda)^{n\bar{x}}} {(n\bar{x})!}
$$
where $\bar{x} \in \{0, \frac{1}{n}, \frac{2}{n} ...\}$</p>
<p>For $n \ne 1$ this is not a Poisson distribution, as the pmf is not of the form $\frac{e^{-\lambda}\lambda^{\bar{x}}} {\bar{x}!}$.</p>
<p>The distribution of the sum is rescaled by a factor of $1/n$, as can be seen in these plots </p>
<p>(Note that the probabilities are only defined for the plotted circles, trendlines are just for visualising).</p>
<p><a href="https://i.stack.imgur.com/AvP2k.png" rel="nofollow noreferrer">N = 2, λ = 3</a></p>
<p><a href="https://i.stack.imgur.com/VKCpo.png" rel="nofollow noreferrer">N = 10, λ = 2</a></p>
Answer 2: <p>Your answer is related to this subject</p>
<p><a href="https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions</a></p>
<p>Since each variable in the sample is Poisson($\lambda$) distributed (and they are independent), the sum is, as you said, Poisson($n\lambda$).</p>
<p>The proof follows from the fact that the characteristic function of a sum of independent random variables is the product of their characteristic functions. Using the inversion formula then gives the distribution of the sum (for discrete random variables, the probability mass function).</p>
<p>Part (b) is not much different. </p>
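<p>Concretely, for $X\sim\text{Poisson}(\lambda)$ the characteristic function is
$$\varphi_X(t)=\mathbf{E}\left(e^{itX}\right)=\sum_{k=0}^{\infty}e^{itk}\,\frac{e^{-\lambda}\lambda^{k}}{k!}=e^{-\lambda}e^{\lambda e^{it}}=e^{\lambda\left(e^{it}-1\right)},$$
so for independent $X_1,\dots,X_n$,
$$\varphi_{\sum X_i}(t)=\prod_{i=1}^{n}\varphi_{X_i}(t)=e^{n\lambda\left(e^{it}-1\right)},$$
which is the characteristic function of a Poisson($n\lambda$) variable.</p>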
Answer 3: <p>I think the standard way of solving this is induction, i.e. computing the first few probabilities and inferring the pattern, e.g. </p>
<p><span class="math-container">$P(\sum{x_i = 0}) = \binom{n}{0}(e^{-\lambda})^n = e^{-n\lambda} ; \;$</span> every x has to be 0. </p>
<p><span class="math-container">$P(\sum{x_i = 1}) = \binom{n}{1}(e^{-\lambda})^{n-1}(\frac{\lambda}{1!}e^{-\lambda}) = n\lambda e^{-n\lambda} ; \;$</span> exactly one x has to be 1.</p>
<p><span class="math-container">$P(\sum{x_i = 2}) = \binom{n}{2}(e^{-\lambda})^{n-2}(\frac{\lambda}{1!}e^{-\lambda})^2 + \binom{n}{1}(e^{-\lambda})^{n-1}(\frac{\lambda^2}{2!}e^{-\lambda}) = \frac{(n\lambda)^2}{2!} e^{-n\lambda} ; \;$</span> either 2 x's are 1, or 1 x is 2. </p>
<p>.</p>
<p>.</p>
<p>And so by induction <span class="math-container">$\sum{x_i} \sim Pois(n\lambda)$</span></p>
<hr>
<p>An "easier" way is using the moment generating function (<strong>MGF</strong>) (or alternatively the characteristic function). For this you need a small conjecture: </p>
<p><span class="math-container">$M_{\sum{x_i}}(t) = \mathbf{E}(e^{\sum{x_i} t}) = \mathbf{E}(\prod e^{x_i t}) = \prod\mathbf{E}(e^{x_i t}) = \prod\mathbf{E}(e^{x t}) = \prod M_{x}(t) = M_{x}(t)^n$</span></p>
<p>The MGF of Poisson is <span class="math-container">$M_x(t) = e^{\lambda(e^t-1)}$</span>, so the MGF of the sum is <span class="math-container">$M_{\sum x_i}(t) = e^{n\lambda(e^t-1)}$</span>, so by this we can say that <span class="math-container">$\sum{x_i} \sim Pois(n\lambda)$</span>.</p>
<p>Since <span class="math-container">$\sum{x_i} = n \bar{x}$</span>, then <span class="math-container">$\bar x \sim \frac{1}{n}Pois(n\lambda)$</span></p>
Answer 4: <p>The sampling distribution of a Poisson(λ) distributed random variable is given by:</p>
<p>$$
P(X_i = x_i) = f(x_i) = \frac{e^{-\lambda}\lambda^{x_i}} {x_i!}
$$
where $x_i \in \{0, 1, 2 ...\}$</p>
<p>As you have correctly suggested the sum is Poisson(nλ) and therefore, substituting nλ for λ and $\Sigma x_i$ for $x_i$ the sampling distribution for $\Sigma X_i$ is given by:</p>
<p>$$
P(\Sigma X_i = \Sigma x_i) = g(\Sigma x_i) = \frac{e^{-n\lambda}(n\lambda)^{\Sigma x_i}} {(\Sigma x_i)!}
$$
where $\Sigma x_i \in \{0, 1, 2 ...\}$</p>
<p>the sampling distribution for the sample mean, $\bar{X}$, is derived using the relation $\Sigma X_i =n\bar{X} $ as follows:</p>
<p>$$
h(\bar{x}) = P(\bar{X} = \bar{x}) = P(\Sigma X_i = n\bar{x}) = g(n\bar{x})
$$</p>
<p>$$
= \frac{e^{-n\lambda}(n\lambda)^{n\bar{x}}} {(n\bar{x})!}
$$
where $\bar{x} \in \{0, \frac{1}{n}, \frac{2}{n} ...\}$</p>
<p>For $ n \ne 1$ This is not a poisson distribution as the pdf is not of the form $ \frac{e^{-\lambda}\lambda^{\bar{x}}} {\bar{x}!}$.</p>
<p>The sum distribution is stretched by factor 1/n as can be seen in these plots </p>
<p>(Note that the probabilities are only defined for the plotted circles, trendlines are just for visualising).</p>
<p><a href="https://i.stack.imgur.com/AvP2k.png" rel="nofollow noreferrer">N = 2, λ = 3</a></p>
<p><a href="https://i.stack.imgur.com/VKCpo.png" rel="nofollow noreferrer">N = 10, λ = 2</a></p>
Answer 5: <p>Your answer is related to this subject</p>
<p><a href="https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/List_of_convolutions_of_probability_distributions</a></p>
<p>Since each variable in the sample is Poisson($\lambda$) distributed(and they are essentially independent), the sum would be, as you said, Poisson($n \lambda$).</p>
<p>The proof is simply deduced from the fact that the characteristic function of sum of independent random variables is the multiplication of their characteristic function. And using the inverse formula gives its density function(in case of discrete random variables their mass probability function).</p>
<p>Well the section(b) is not much more different. </p>
Answer 6: <p>I think the standard way of solving this is using induction. I.e. computing each probability and inducing the pattern. e.g. </p>
<p><span class="math-container">$P(\sum{x_i = 0}) = \binom{n}{0}(e^{-\lambda})^n = e^{-n\lambda} ; \;$</span> every x has to be 0. </p>
<p><span class="math-container">$P(\sum{x_i = 1}) = \binom{n}{1}(e^{-\lambda})^{n-1}(\frac{\lambda}{1!}e^{-\lambda}) = n\lambda e^{-n\lambda} ; \;$</span> exactly one x has to be 1.</p>
<p><span class="math-container">$P(\sum{x_i = 2}) = \binom{n}{2}(e^{-\lambda})^{n-2}(\frac{\lambda}{1!}e^{-\lambda})^2 + \binom{n}{1}(e^{-\lambda})^{n-1}(\frac{\lambda^2}{2!}e^{-\lambda}) = \frac{(n\lambda)^2}{2!} e^{-n\lambda} ; \;$</span> either 2 x's are 1, or 1 x is 2. </p>
<p>.</p>
<p>.</p>
<p>And so by induction <span class="math-container">$\sum{x_i} \sim Pois(n\lambda)$</span></p>
<hr>
<p>An "easier" way is using the moment generating function (<strong>MGF</strong>) (or alternatively the characteristic function). For this you need a small conjecture: </p>
<p><span class="math-container">$M_{\sum{x_i}}(t) = \mathbf{E}(e^{\sum{x_i} t}) = \mathbf{E}(\prod e^{x_i t}) = \prod\mathbf{E}(e^{x_i t}) = \prod\mathbf{E}(e^{x t}) = \prod M_{x}(t) = M_{x}(t)^n$</span></p>
<p>The MGF of Poisson is <span class="math-container">$M_x(t) = e^{\lambda(e^t-1)}$</span>, so the MGF of the sum is <span class="math-container">$M_{\sum x_i}(t) = e^{n\lambda(e^t-1)}$</span>, and by uniqueness of the MGF, <span class="math-container">$\sum{x_i} \sim Pois(n\lambda)$</span>.</p>
<p>Since <span class="math-container">$\sum{x_i} = n \bar{x}$</span>, it follows that <span class="math-container">$\bar x \sim \frac{1}{n}Pois(n\lambda)$</span>.</p>
|
Asymptotic behavior of Harmonic-like series $\sum_{n=1}^{k} \psi(n) \psi'(n) - (\ln k)^2/2$ as $k \to \infty$ | 2,085,313 | 5 | https://math.stackexchange.com/questions/2085313/asymptotic-behavior-of-harmonic-like-series-sum-n-1k-psin-psin | <p>I would like to obtain a closed form for the following limit:
$$I_2=\lim_{k\to \infty} \left ( - (\ln k)^2/2 +\sum_{n=1}^{k} \psi(n) \, \psi'(n) \right)$$</p>
<p>Here $\psi(n)$ is digamma function.</p>
<p>Using the method detailed in <a href="https://math.stackexchange.com/a/874925/14825">this answer</a>,
I was able to compute simpler, related series:
$$\lim_{k\to \infty} \sum_{n=1}^{k} \left (\psi'(n) -1/n \right) =1 $$
$$\sum_{n=1}^{\infty} \psi''(n) =-\frac{\pi^2}{3} $$
But $I_2$ seems to be tougher because of the product of two digamma's.
The divergence of $(\ln k)^2/2$ is matched by the leading terms of the asymptotic series for $\psi(n) \psi'(n)$, via the definition of the <a href="https://en.wikipedia.org/wiki/Stieltjes_constants#Representations" rel="nofollow noreferrer">Stieltjes constant</a>, $$ \lim_{k\to \infty} \left ( \sum_{n=1}^{k} \frac{\ln n}{n} - (\ln k)^2/2 \right ) =\gamma_1 $$
but I am stuck with the remainder term.</p>
<p><strong>Side remark</strong>: the problem originates in physics, see <a href="https://math.stackexchange.com/questions/1393334/fourier-transform-of-gamma-left-frac12-i-fracp2-pi-right-sqrt">my older question</a>. In particular, I was able to show that $\langle x \rangle \approx -0.251022$ defined in that question actually equals exactly $-(1+\gamma_0)/(2 \pi)$ where $\gamma_0$ is Euler's constant. The answer I seek here is the only piece missing on my path to a closed form for $\langle x^2 \rangle$. </p>
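<p>The two simpler limits above are easy to confirm numerically; here is a minimal sketch assuming <code>mpmath</code> is available (the truncation point $K$ is arbitrary):</p>
<pre><code># Numerical check of the two simpler series. Assumes mpmath is available.
from mpmath import mp, psi, mpf, pi

mp.dps = 20
K = 20000
s1 = sum(psi(1, n) - mpf(1) / n for n in range(1, K + 1))
s2 = sum(psi(2, n) for n in range(1, K + 1))

print(s1)              # approaches 1 (tail error ~ 1/(2K))
print(s2, -pi**2 / 3)  # partial sum vs the limit -pi^2/3
</code></pre>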
| [
{
"answer_id": 2089162,
"score": 7,
"text": "<p><strong>Hint</strong>. One may obtain, for $N\\ge1$,</p>\n\n<blockquote>\n <p>$$\n\\begin{align}\n\\sum_{n=1}^{N} \\psi(n) \\, \\psi'(n)&=\\left(N\\psi(N)-N+\\frac12\\right)\\psi'(N)+\\frac{\\psi(N)^2}2- \\psi(N)+\\frac{\\pi^2}{12}-\\gamma-\\frac{\\gamma^2}2 \\tag1\n\\end{align}\n$$ </p>\n</blockquote>\n\n<p>equivalently, for $N\\ge1$,\n$$\n\\small{\\begin{align}\n&\\sum_{n=1}^{N} \\left(H_{n-1}-\\gamma\\right)\\left(\\frac{\\pi^2}{6}-H_{n-1,2}\\right)\n\\\\&=\\left(NH_{N-1}-(\\gamma+1)N+\\frac12\\right)\\left(\\frac{\\pi^2}{6}-H_{N-1,2}\\right)+\\frac{\\left(H_{N-1}-\\gamma\\right)^2}2-\\frac{H_{N-1,2}}2-H_{N-1}+\\frac{\\pi^2}{6}-\\frac{\\gamma^2}2, \n\\end{align}}\n$$ then, one may recall the standard <a href=\"http://dlmf.nist.gov/5.11\" rel=\"noreferrer\">asymptotic expansions</a> of the polygamma functions $\\psi=\\Gamma'/\\Gamma$ and $\\psi'$, as $X \\to \\infty$,\n$$\n\\begin{align}\n\\psi(X)&= \\ln X- \\frac{1}{2X}-\\frac{1}{12X^2}+\\mathcal{O}\\left(\\frac{1}{X^4}\\right)\n\\\\\\psi'(X)&= \\frac{1}{X}+\\frac{1}{2 X^2}+\\frac{1}{6 X^3}+\\mathcal{O}\\left(\\frac{1}{X^4}\\right)\n\\end{align}\n$$ yielding, as $N \\to \\infty$,</p>\n\n<blockquote>\n <p>$$\n-\\frac{(\\ln N)^2}2 +\\sum_{n=1}^{N} \\psi(n) \\, \\psi'(n)=\\color{#9E15E3}{\\frac{\\pi^2}{12}-\\gamma-\\frac{\\gamma^2}2-1}+\\frac{\\ln N}{12N^2}-\\frac{1}{24N^2}+\\mathcal{O}\\left(\\frac{\\ln N}{N^4}\\right) \\tag2\n$$ </p>\n</blockquote>\n\n<p>then one gets the desired limit.</p>\n\n<p>To prove $(1)$ one may use a <a href=\"https://en.wikipedia.org/wiki/Summation_by_parts#Statement\" rel=\"noreferrer\">summation by parts</a> with\n$$\nf_n=\\psi(n),\\quad g_{n+1}-g_n=\\psi'(n),\\quad g_1=\\frac{\\pi^2}6,\\quad n\\ge1.\n$$\nThe above asymptotic expansion can be obtained at any order.</p>\n"
},
{
"answer_id": 2085752,
"score": 2,
"text": "<p><em>This is not an answer.</em></p>\n\n<p>Considering $$I_2(k)=\\sum_{n=1}^k \\psi ^{(0)}(n)\\,\\, \\psi ^{(1)}(n)-\\frac{\\log ^2(k)}{2}$$ being stuck, I generated a few values\n$$\\left(\n\\begin{array}{cc}\n k & I_2(k) \\\\\n 1000 & -0.92133705940182217478 \\\\\n 2000 & -0.92133744544582151652 \\\\\n 3000 & -0.92133752387787776354 \\\\\n 4000 & -0.92133755278727115778 \\\\\n 5000 & -0.92133756665730197883 \\\\\n 6000 & -0.92133757440092091621 \\\\\n 7000 & -0.92133757917436506891 \\\\\n 8000 & -0.92133758233024150188 \\\\\n 9000 & -0.92133758452841907625 \\\\\n 10000 & -0.92133758612266202362\n\\end{array}\n\\right)$$ The inverse symbolic calculators I used did not identify the result.</p>\n"
}
] | Question: Asymptotic behavior of Harmonic-like series $\sum_{n=1}^{k} \psi(n) \psi'(n) - (\ln k)^2/2$ as $k \to \infty$
Answer 1: <p><strong>Hint</strong>. One may obtain, for $N\ge1$,</p>
<blockquote>
<p>$$
\begin{align}
\sum_{n=1}^{N} \psi(n) \, \psi'(n)&=\left(N\psi(N)-N+\frac12\right)\psi'(N)+\frac{\psi(N)^2}2- \psi(N)+\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2 \tag1
\end{align}
$$ </p>
</blockquote>
<p>equivalently, for $N\ge1$,
$$
\small{\begin{align}
&\sum_{n=1}^{N} \left(H_{n-1}-\gamma\right)\left(\frac{\pi^2}{6}-H_{n-1,2}\right)
\\&=\left(NH_{N-1}-(\gamma+1)N+\frac12\right)\left(\frac{\pi^2}{6}-H_{N-1,2}\right)+\frac{\left(H_{N-1}-\gamma\right)^2}2-\frac{H_{N-1,2}}2-H_{N-1}+\frac{\pi^2}{6}-\frac{\gamma^2}2,
\end{align}}
$$ then, one may recall the standard <a href="http://dlmf.nist.gov/5.11" rel="noreferrer">asymptotic expansions</a> of the polygamma functions $\psi=\Gamma'/\Gamma$ and $\psi'$, as $X \to \infty$,
$$
\begin{align}
\psi(X)&= \ln X- \frac{1}{2X}-\frac{1}{12X^2}+\mathcal{O}\left(\frac{1}{X^4}\right)
\\\psi'(X)&= \frac{1}{X}+\frac{1}{2 X^2}+\frac{1}{6 X^3}+\mathcal{O}\left(\frac{1}{X^4}\right)
\end{align}
$$ yielding, as $N \to \infty$,</p>
<blockquote>
<p>$$
-\frac{(\ln N)^2}2 +\sum_{n=1}^{N} \psi(n) \, \psi'(n)=\color{#9E15E3}{\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2-1}+\frac{\ln N}{12N^2}-\frac{1}{24N^2}+\mathcal{O}\left(\frac{\ln N}{N^4}\right) \tag2
$$ </p>
</blockquote>
<p>then one gets the desired limit.</p>
<p>To prove $(1)$ one may use a <a href="https://en.wikipedia.org/wiki/Summation_by_parts#Statement" rel="noreferrer">summation by parts</a> with
$$
f_n=\psi(n),\quad g_{n+1}-g_n=\psi'(n),\quad g_1=\frac{\pi^2}6,\quad n\ge1.
$$
The above asymptotic expansion can be obtained at any order.</p>
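As a sketch of a numerical check (assuming <code>mpmath</code> is available; $N$ is an arbitrary truncation), the closed form in $(2)$ matches the partial sums:
<pre><code># Compare the closed form pi^2/12 - gamma - gamma^2/2 - 1 with a
# partial sum of psi(n)*psi'(n) - (ln N)^2/2. Assumes mpmath is available.
from mpmath import mp, psi, euler, pi, log

mp.dps = 25
closed = pi**2 / 12 - euler - euler**2 / 2 - 1
N = 5000
partial = sum(psi(0, n) * psi(1, n) for n in range(1, N + 1)) - log(N)**2 / 2

print(closed)    # -0.92133759...
print(partial)   # agrees to about 7 digits (cf. the table in the next answer)
</code></pre>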
Answer 2: <p><em>This is not an answer.</em></p>
<p>Considering $$I_2(k)=\sum_{n=1}^k \psi ^{(0)}(n)\,\, \psi ^{(1)}(n)-\frac{\log ^2(k)}{2}$$ and being stuck, I generated a few values
$$\left(
\begin{array}{cc}
k & I_2(k) \\
1000 & -0.92133705940182217478 \\
2000 & -0.92133744544582151652 \\
3000 & -0.92133752387787776354 \\
4000 & -0.92133755278727115778 \\
5000 & -0.92133756665730197883 \\
6000 & -0.92133757440092091621 \\
7000 & -0.92133757917436506891 \\
8000 & -0.92133758233024150188 \\
9000 & -0.92133758452841907625 \\
10000 & -0.92133758612266202362
\end{array}
\right)$$ The inverse symbolic calculators I used did not identify the result.</p>
Answer 7: <p><strong>Hint</strong>. One may obtain, for $N\ge1$,</p>
<blockquote>
<p>$$
\begin{align}
\sum_{n=1}^{N} \psi(n) \, \psi'(n)&=\left(N\psi(N)-N+\frac12\right)\psi'(N)+\frac{\psi(N)^2}2- \psi(N)+\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2 \tag1
\end{align}
$$ </p>
</blockquote>
<p>equivalently, for $N\ge1$,
$$
\small{\begin{align}
&\sum_{n=1}^{N} \left(H_{n-1}-\gamma\right)\left(\frac{\pi^2}{6}-H_{n-1,2}\right)
\\&=\left(NH_{N-1}-(\gamma+1)N+\frac12\right)\left(\frac{\pi^2}{6}-H_{N-1,2}\right)+\frac{\left(H_{N-1}-\gamma\right)^2}2-\frac{H_{N-1,2}}2-H_{N-1}+\frac{\pi^2}{6}-\frac{\gamma^2}2,
\end{align}}
$$ then, one may recall the standard <a href="http://dlmf.nist.gov/5.11" rel="noreferrer">asymptotic expansions</a> of the polygamma functions $\psi=\Gamma'/\Gamma$ and $\psi'$, as $X \to \infty$,
$$
\begin{align}
\psi(X)&= \ln X- \frac{1}{2X}-\frac{1}{12X^2}+\mathcal{O}\left(\frac{1}{X^4}\right)
\\\psi'(X)&= \frac{1}{X}+\frac{1}{2 X^2}+\frac{1}{6 X^3}+\mathcal{O}\left(\frac{1}{X^4}\right)
\end{align}
$$ yielding, as $N \to \infty$,</p>
<blockquote>
<p>$$
-\frac{(\ln N)^2}2 +\sum_{n=1}^{N} \psi(n) \, \psi'(n)=\color{#9E15E3}{\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2-1}+\frac{\ln N}{12N^2}-\frac{1}{24N^2}+\mathcal{O}\left(\frac{\ln N}{N^4}\right) \tag2
$$ </p>
</blockquote>
<p>then one gets the desired limit.</p>
<p>To prove $(1)$ one may use a <a href="https://en.wikipedia.org/wiki/Summation_by_parts#Statement" rel="noreferrer">summation by parts</a> with
$$
f_n=\psi(n),\quad g_{n+1}-g_n=\psi'(n),\quad g_1=\frac{\pi^2}6,\quad n\ge1.
$$
The above asymptotic expansion can be obtained at any order.</p>
Answer 8: <p><em>This is not an answer.</em></p>
<p>Considering $$I_2(k)=\sum_{n=1}^k \psi ^{(0)}(n)\,\, \psi ^{(1)}(n)-\frac{\log ^2(k)}{2}$$ being stuck, I generated a few values
$$\left(
\begin{array}{cc}
k & I_2(k) \\
1000 & -0.92133705940182217478 \\
2000 & -0.92133744544582151652 \\
3000 & -0.92133752387787776354 \\
4000 & -0.92133755278727115778 \\
5000 & -0.92133756665730197883 \\
6000 & -0.92133757440092091621 \\
7000 & -0.92133757917436506891 \\
8000 & -0.92133758233024150188 \\
9000 & -0.92133758452841907625 \\
10000 & -0.92133758612266202362
\end{array}
\right)$$ The inverse symbolic calculators I used did not identify the result.</p>
Answer 9: <p><strong>Hint</strong>. One may obtain, for $N\ge1$,</p>
<blockquote>
<p>$$
\begin{align}
\sum_{n=1}^{N} \psi(n) \, \psi'(n)&=\left(N\psi(N)-N+\frac12\right)\psi'(N)+\frac{\psi(N)^2}2- \psi(N)+\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2 \tag1
\end{align}
$$ </p>
</blockquote>
<p>equivalently, for $N\ge1$,
$$
\small{\begin{align}
&\sum_{n=1}^{N} \left(H_{n-1}-\gamma\right)\left(\frac{\pi^2}{6}-H_{n-1,2}\right)
\\&=\left(NH_{N-1}-(\gamma+1)N+\frac12\right)\left(\frac{\pi^2}{6}-H_{N-1,2}\right)+\frac{\left(H_{N-1}-\gamma\right)^2}2-\frac{H_{N-1,2}}2-H_{N-1}+\frac{\pi^2}{6}-\frac{\gamma^2}2,
\end{align}}
$$ then, one may recall the standard <a href="http://dlmf.nist.gov/5.11" rel="noreferrer">asymptotic expansions</a> of the polygamma functions $\psi=\Gamma'/\Gamma$ and $\psi'$, as $X \to \infty$,
$$
\begin{align}
\psi(X)&= \ln X- \frac{1}{2X}-\frac{1}{12X^2}+\mathcal{O}\left(\frac{1}{X^4}\right)
\\\psi'(X)&= \frac{1}{X}+\frac{1}{2 X^2}+\frac{1}{6 X^3}+\mathcal{O}\left(\frac{1}{X^4}\right)
\end{align}
$$ yielding, as $N \to \infty$,</p>
<blockquote>
<p>$$
-\frac{(\ln N)^2}2 +\sum_{n=1}^{N} \psi(n) \, \psi'(n)=\color{#9E15E3}{\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2-1}+\frac{\ln N}{12N^2}-\frac{1}{24N^2}+\mathcal{O}\left(\frac{\ln N}{N^4}\right) \tag2
$$ </p>
</blockquote>
<p>then one gets the desired limit.</p>
<p>To prove $(1)$ one may use a <a href="https://en.wikipedia.org/wiki/Summation_by_parts#Statement" rel="noreferrer">summation by parts</a> with
$$
f_n=\psi(n),\quad g_{n+1}-g_n=\psi'(n),\quad g_1=\frac{\pi^2}6,\quad n\ge1.
$$
The above asymptotic expansion can be obtained at any order.</p>
Answer 10: <p><em>This is not an answer.</em></p>
<p>Considering $$I_2(k)=\sum_{n=1}^k \psi ^{(0)}(n)\,\, \psi ^{(1)}(n)-\frac{\log ^2(k)}{2}$$ being stuck, I generated a few values
$$\left(
\begin{array}{cc}
k & I_2(k) \\
1000 & -0.92133705940182217478 \\
2000 & -0.92133744544582151652 \\
3000 & -0.92133752387787776354 \\
4000 & -0.92133755278727115778 \\
5000 & -0.92133756665730197883 \\
6000 & -0.92133757440092091621 \\
7000 & -0.92133757917436506891 \\
8000 & -0.92133758233024150188 \\
9000 & -0.92133758452841907625 \\
10000 & -0.92133758612266202362
\end{array}
\right)$$ The inverse symbolic calculators I used did not identify the result.</p>
Answer 11: <p><strong>Hint</strong>. One may obtain, for $N\ge1$,</p>
<blockquote>
<p>$$
\begin{align}
\sum_{n=1}^{N} \psi(n) \, \psi'(n)&=\left(N\psi(N)-N+\frac12\right)\psi'(N)+\frac{\psi(N)^2}2- \psi(N)+\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2 \tag1
\end{align}
$$ </p>
</blockquote>
<p>equivalently, for $N\ge1$,
$$
\small{\begin{align}
&\sum_{n=1}^{N} \left(H_{n-1}-\gamma\right)\left(\frac{\pi^2}{6}-H_{n-1,2}\right)
\\&=\left(NH_{N-1}-(\gamma+1)N+\frac12\right)\left(\frac{\pi^2}{6}-H_{N-1,2}\right)+\frac{\left(H_{N-1}-\gamma\right)^2}2-\frac{H_{N-1,2}}2-H_{N-1}+\frac{\pi^2}{6}-\frac{\gamma^2}2,
\end{align}}
$$ then, one may recall the standard <a href="http://dlmf.nist.gov/5.11" rel="noreferrer">asymptotic expansions</a> of the polygamma functions $\psi=\Gamma'/\Gamma$ and $\psi'$, as $X \to \infty$,
$$
\begin{align}
\psi(X)&= \ln X- \frac{1}{2X}-\frac{1}{12X^2}+\mathcal{O}\left(\frac{1}{X^4}\right)
\\\psi'(X)&= \frac{1}{X}+\frac{1}{2 X^2}+\frac{1}{6 X^3}+\mathcal{O}\left(\frac{1}{X^4}\right)
\end{align}
$$ yielding, as $N \to \infty$,</p>
<blockquote>
<p>$$
-\frac{(\ln N)^2}2 +\sum_{n=1}^{N} \psi(n) \, \psi'(n)=\color{#9E15E3}{\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2-1}+\frac{\ln N}{12N^2}-\frac{1}{24N^2}+\mathcal{O}\left(\frac{\ln N}{N^4}\right) \tag2
$$ </p>
</blockquote>
<p>then one gets the desired limit.</p>
<p>To prove $(1)$ one may use a <a href="https://en.wikipedia.org/wiki/Summation_by_parts#Statement" rel="noreferrer">summation by parts</a> with
$$
f_n=\psi(n),\quad g_{n+1}-g_n=\psi'(n),\quad g_1=\frac{\pi^2}6,\quad n\ge1.
$$
The above asymptotic expansion can be obtained at any order.</p>
Answer 12: <p><em>This is not an answer.</em></p>
<p>Considering $$I_2(k)=\sum_{n=1}^k \psi ^{(0)}(n)\,\, \psi ^{(1)}(n)-\frac{\log ^2(k)}{2}$$ being stuck, I generated a few values
$$\left(
\begin{array}{cc}
k & I_2(k) \\
1000 & -0.92133705940182217478 \\
2000 & -0.92133744544582151652 \\
3000 & -0.92133752387787776354 \\
4000 & -0.92133755278727115778 \\
5000 & -0.92133756665730197883 \\
6000 & -0.92133757440092091621 \\
7000 & -0.92133757917436506891 \\
8000 & -0.92133758233024150188 \\
9000 & -0.92133758452841907625 \\
10000 & -0.92133758612266202362
\end{array}
\right)$$ The inverse symbolic calculators I used did not identify the result.</p>
Answer 13: <p><strong>Hint</strong>. One may obtain, for $N\ge1$,</p>
<blockquote>
<p>$$
\begin{align}
\sum_{n=1}^{N} \psi(n) \, \psi'(n)&=\left(N\psi(N)-N+\frac12\right)\psi'(N)+\frac{\psi(N)^2}2- \psi(N)+\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2 \tag1
\end{align}
$$ </p>
</blockquote>
<p>equivalently, for $N\ge1$,
$$
\small{\begin{align}
&\sum_{n=1}^{N} \left(H_{n-1}-\gamma\right)\left(\frac{\pi^2}{6}-H_{n-1,2}\right)
\\&=\left(NH_{N-1}-(\gamma+1)N+\frac12\right)\left(\frac{\pi^2}{6}-H_{N-1,2}\right)+\frac{\left(H_{N-1}-\gamma\right)^2}2-\frac{H_{N-1,2}}2-H_{N-1}+\frac{\pi^2}{6}-\frac{\gamma^2}2,
\end{align}}
$$ then, one may recall the standard <a href="http://dlmf.nist.gov/5.11" rel="noreferrer">asymptotic expansions</a> of the polygamma functions $\psi=\Gamma'/\Gamma$ and $\psi'$, as $X \to \infty$,
$$
\begin{align}
\psi(X)&= \ln X- \frac{1}{2X}-\frac{1}{12X^2}+\mathcal{O}\left(\frac{1}{X^4}\right)
\\\psi'(X)&= \frac{1}{X}+\frac{1}{2 X^2}+\frac{1}{6 X^3}+\mathcal{O}\left(\frac{1}{X^4}\right)
\end{align}
$$ yielding, as $N \to \infty$,</p>
<blockquote>
<p>$$
-\frac{(\ln N)^2}2 +\sum_{n=1}^{N} \psi(n) \, \psi'(n)=\color{#9E15E3}{\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2-1}+\frac{\ln N}{12N^2}-\frac{1}{24N^2}+\mathcal{O}\left(\frac{\ln N}{N^4}\right) \tag2
$$ </p>
</blockquote>
<p>then one gets the desired limit.</p>
<p>To prove $(1)$ one may use a <a href="https://en.wikipedia.org/wiki/Summation_by_parts#Statement" rel="noreferrer">summation by parts</a> with
$$
f_n=\psi(n),\quad g_{n+1}-g_n=\psi'(n),\quad g_1=\frac{\pi^2}6,\quad n\ge1.
$$
The above asymptotic expansion can be obtained at any order.</p>
Answer 14: <p><em>This is not an answer.</em></p>
<p>Considering $$I_2(k)=\sum_{n=1}^k \psi ^{(0)}(n)\,\, \psi ^{(1)}(n)-\frac{\log ^2(k)}{2}$$ being stuck, I generated a few values
$$\left(
\begin{array}{cc}
k & I_2(k) \\
1000 & -0.92133705940182217478 \\
2000 & -0.92133744544582151652 \\
3000 & -0.92133752387787776354 \\
4000 & -0.92133755278727115778 \\
5000 & -0.92133756665730197883 \\
6000 & -0.92133757440092091621 \\
7000 & -0.92133757917436506891 \\
8000 & -0.92133758233024150188 \\
9000 & -0.92133758452841907625 \\
10000 & -0.92133758612266202362
\end{array}
\right)$$ The inverse symbolic calculators I used did not identify the result.</p>
Answer 15: <p><strong>Hint</strong>. One may obtain, for $N\ge1$,</p>
<blockquote>
<p>$$
\begin{align}
\sum_{n=1}^{N} \psi(n) \, \psi'(n)&=\left(N\psi(N)-N+\frac12\right)\psi'(N)+\frac{\psi(N)^2}2- \psi(N)+\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2 \tag1
\end{align}
$$ </p>
</blockquote>
<p>equivalently, for $N\ge1$,
$$
\small{\begin{align}
&\sum_{n=1}^{N} \left(H_{n-1}-\gamma\right)\left(\frac{\pi^2}{6}-H_{n-1,2}\right)
\\&=\left(NH_{N-1}-(\gamma+1)N+\frac12\right)\left(\frac{\pi^2}{6}-H_{N-1,2}\right)+\frac{\left(H_{N-1}-\gamma\right)^2}2-\frac{H_{N-1,2}}2-H_{N-1}+\frac{\pi^2}{6}-\frac{\gamma^2}2,
\end{align}}
$$ then, one may recall the standard <a href="http://dlmf.nist.gov/5.11" rel="noreferrer">asymptotic expansions</a> of the polygamma functions $\psi=\Gamma'/\Gamma$ and $\psi'$, as $X \to \infty$,
$$
\begin{align}
\psi(X)&= \ln X- \frac{1}{2X}-\frac{1}{12X^2}+\mathcal{O}\left(\frac{1}{X^4}\right)
\\\psi'(X)&= \frac{1}{X}+\frac{1}{2 X^2}+\frac{1}{6 X^3}+\mathcal{O}\left(\frac{1}{X^4}\right)
\end{align}
$$ yielding, as $N \to \infty$,</p>
<blockquote>
<p>$$
-\frac{(\ln N)^2}2 +\sum_{n=1}^{N} \psi(n) \, \psi'(n)=\color{#9E15E3}{\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2-1}+\frac{\ln N}{12N^2}-\frac{1}{24N^2}+\mathcal{O}\left(\frac{\ln N}{N^4}\right) \tag2
$$ </p>
</blockquote>
<p>then one gets the desired limit.</p>
<p>To prove $(1)$ one may use a <a href="https://en.wikipedia.org/wiki/Summation_by_parts#Statement" rel="noreferrer">summation by parts</a> with
$$
f_n=\psi(n),\quad g_{n+1}-g_n=\psi'(n),\quad g_1=\frac{\pi^2}6,\quad n\ge1.
$$
The above asymptotic expansion can be obtained at any order.</p>
Answer 16: <p><em>This is not an answer.</em></p>
<p>Considering $$I_2(k)=\sum_{n=1}^k \psi ^{(0)}(n)\,\, \psi ^{(1)}(n)-\frac{\log ^2(k)}{2}$$ being stuck, I generated a few values
$$\left(
\begin{array}{cc}
k & I_2(k) \\
1000 & -0.92133705940182217478 \\
2000 & -0.92133744544582151652 \\
3000 & -0.92133752387787776354 \\
4000 & -0.92133755278727115778 \\
5000 & -0.92133756665730197883 \\
6000 & -0.92133757440092091621 \\
7000 & -0.92133757917436506891 \\
8000 & -0.92133758233024150188 \\
9000 & -0.92133758452841907625 \\
10000 & -0.92133758612266202362
\end{array}
\right)$$ The inverse symbolic calculators I used did not identify the result.</p>
Answer 17: <p><strong>Hint</strong>. One may obtain, for $N\ge1$,</p>
<blockquote>
<p>$$
\begin{align}
\sum_{n=1}^{N} \psi(n) \, \psi'(n)&=\left(N\psi(N)-N+\frac12\right)\psi'(N)+\frac{\psi(N)^2}2- \psi(N)+\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2 \tag1
\end{align}
$$ </p>
</blockquote>
<p>equivalently, for $N\ge1$,
$$
\small{\begin{align}
&\sum_{n=1}^{N} \left(H_{n-1}-\gamma\right)\left(\frac{\pi^2}{6}-H_{n-1,2}\right)
\\&=\left(NH_{N-1}-(\gamma+1)N+\frac12\right)\left(\frac{\pi^2}{6}-H_{N-1,2}\right)+\frac{\left(H_{N-1}-\gamma\right)^2}2-\frac{H_{N-1,2}}2-H_{N-1}+\frac{\pi^2}{6}-\frac{\gamma^2}2,
\end{align}}
$$ then, one may recall the standard <a href="http://dlmf.nist.gov/5.11" rel="noreferrer">asymptotic expansions</a> of the polygamma functions $\psi=\Gamma'/\Gamma$ and $\psi'$, as $X \to \infty$,
$$
\begin{align}
\psi(X)&= \ln X- \frac{1}{2X}-\frac{1}{12X^2}+\mathcal{O}\left(\frac{1}{X^4}\right)
\\\psi'(X)&= \frac{1}{X}+\frac{1}{2 X^2}+\frac{1}{6 X^3}+\mathcal{O}\left(\frac{1}{X^4}\right)
\end{align}
$$ yielding, as $N \to \infty$,</p>
<blockquote>
<p>$$
-\frac{(\ln N)^2}2 +\sum_{n=1}^{N} \psi(n) \, \psi'(n)=\color{#9E15E3}{\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2-1}+\frac{\ln N}{12N^2}-\frac{1}{24N^2}+\mathcal{O}\left(\frac{\ln N}{N^4}\right) \tag2
$$ </p>
</blockquote>
<p>then one gets the desired limit.</p>
<p>To prove $(1)$ one may use a <a href="https://en.wikipedia.org/wiki/Summation_by_parts#Statement" rel="noreferrer">summation by parts</a> with
$$
f_n=\psi(n),\quad g_{n+1}-g_n=\psi'(n),\quad g_1=\frac{\pi^2}6,\quad n\ge1.
$$
The above asymptotic expansion can be obtained at any order.</p>
Answer 18: <p><em>This is not an answer.</em></p>
<p>Considering $$I_2(k)=\sum_{n=1}^k \psi ^{(0)}(n)\,\, \psi ^{(1)}(n)-\frac{\log ^2(k)}{2}$$ being stuck, I generated a few values
$$\left(
\begin{array}{cc}
k & I_2(k) \\
1000 & -0.92133705940182217478 \\
2000 & -0.92133744544582151652 \\
3000 & -0.92133752387787776354 \\
4000 & -0.92133755278727115778 \\
5000 & -0.92133756665730197883 \\
6000 & -0.92133757440092091621 \\
7000 & -0.92133757917436506891 \\
8000 & -0.92133758233024150188 \\
9000 & -0.92133758452841907625 \\
10000 & -0.92133758612266202362
\end{array}
\right)$$ The inverse symbolic calculators I used did not identify the result.</p>
|
Weird Integration problem: $\int_{-2}^{2} \frac{x^2}{1+5^x}dx$ | 2,085,415 | 5 | https://math.stackexchange.com/questions/2085415/weird-integration-problem-int-22-fracx215xdx | <p>$\int_{-2}^{2} \frac{x^2}{1+5^x}dx$</p>
<p>I am stuck at the first step and have tried replacing $5^x$ with $e^{\ln(5^x)}$ but nothing simplifies out in the end. </p>
<p>Any hints how I should proceed?</p>
| [
{
"answer_id": 2085428,
"score": 15,
"text": "<p>$$I=\\int_{-2}^{2} \\frac{x^2}{1+5^x}dx$$</p>\n\n<p>Let $y=-x$, then:</p>\n\n<p>$$I=\\int_{-2}^{2} \\frac{y^2}{1+5^{-y}}dy=\\int_{-2}^{2} \\frac{y^25^y}{1+5^y}dy=\\int_{-2}^{2} \\frac{x^25^x}{1+5^x}dx$$</p>\n\n<p>So:</p>\n\n<p>$$I+I=\\int_{-2}^{2} \\frac{x^2}{1+5^x}dx+\\int_{-2}^{2} \\frac{x^25^x}{1+5^x}dx=\\int_{-2}^2x^2\\,dx=2\\int_0^2x^2\\,dx=2\\left(\\frac{2^3}{3}\\right)=\\frac{16}{3}$$</p>\n\n<p>So, the integral is equal to $\\dfrac{8}{3}$.</p>\n\n<p><strong>Edit:</strong> A clarification on the changing of limits under $y=-x$:</p>\n\n<p>$$\\int_{x=-2}^{x=2}f(x)\\,dx=\\int_{-y=-2}^{-y=2}f(-y)(-\\,dy)=-\\int_{y=2}^{y=-2}f(-y)\\,dy=\\int_{y=-2}^{y=2}f(-y)\\,dy$$</p>\n\n<p>$$=\\int_{-2}^{2}f(-y)\\,dy$$</p>\n"
},
{
"answer_id": 2085426,
"score": 7,
"text": "<p>$$\n\\int_{-2}^{2}\\frac{x^2}{1+5^x}\\,dx\n=\n\\int_{-2}^{0}\\frac{t^2}{1+5^t}\\,dt\n+\n\\int_{0}^{2}\\frac{x^2}{1+5^x}\\,dx\n$$\nSubstitute $t=-x$ in the first integral to get\n$$\n\\int_{0}^{2}\\frac{x^25^x}{1+5^x}\\,dx\n$$</p>\n"
}
] | Question: Weird Integration problem: $\int_{-2}^{2} \frac{x^2}{1+5^x}dx$
Answer 1: <p>$$I=\int_{-2}^{2} \frac{x^2}{1+5^x}dx$$</p>
<p>Let $y=-x$, then:</p>
<p>$$I=\int_{-2}^{2} \frac{y^2}{1+5^{-y}}dy=\int_{-2}^{2} \frac{y^25^y}{1+5^y}dy=\int_{-2}^{2} \frac{x^25^x}{1+5^x}dx$$</p>
<p>So:</p>
<p>$$I+I=\int_{-2}^{2} \frac{x^2}{1+5^x}dx+\int_{-2}^{2} \frac{x^25^x}{1+5^x}dx=\int_{-2}^2x^2\,dx=2\int_0^2x^2\,dx=2\left(\frac{2^3}{3}\right)=\frac{16}{3}$$</p>
<p>So, the integral is equal to $\dfrac{8}{3}$.</p>
<p><strong>Edit:</strong> A clarification on the changing of limits under $y=-x$:</p>
<p>$$\int_{x=-2}^{x=2}f(x)\,dx=\int_{-y=-2}^{-y=2}f(-y)(-\,dy)=-\int_{y=2}^{y=-2}f(-y)\,dy=\int_{y=-2}^{y=2}f(-y)\,dy$$</p>
<p>$$=\int_{-2}^{2}f(-y)\,dy$$</p>
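<p>The same symmetry trick gives a more general identity: for any even function $h$ and any base $c>0$, $$\int_{-a}^{a}\frac{h(x)}{1+c^{x}}\,dx=\int_{0}^{a}h(x)\,dx,$$ so the particular base $5$ plays no role in the final value.</p>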
Answer 2: <p>$$
\int_{-2}^{2}\frac{x^2}{1+5^x}\,dx
=
\int_{-2}^{0}\frac{t^2}{1+5^t}\,dt
+
\int_{0}^{2}\frac{x^2}{1+5^x}\,dx
$$
Substitute $t=-x$ in the first integral to get
$$
\int_{0}^{2}\frac{x^25^x}{1+5^x}\,dx
$$</p>
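<p>As a numerical cross-check (a sketch, assuming the <code>mpmath</code> library is available), direct quadrature agrees with the value $\frac{8}{3}$ obtained above:</p>
<pre><code># Minimal sketch (assumes the mpmath library): quadrature check of the result.
from mpmath import mp, quad, mpf

mp.dps = 20
val = quad(lambda x: x**2 / (1 + mpf(5)**x), [-2, 2])
print(val)         # approx 2.6666666666666666667
print(mpf(8) / 3)  # 2.6666666666666666667
</code></pre>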
|
Dimensional Analyses in Integrals Producing Logarithms | 2,085,680 | 5 | https://math.stackexchange.com/questions/2085680/dimensional-analyses-in-integrals-producing-logarithms | <p>Say I have equations of the form $f(x) = 1/x$ (to illustrate the simplest case of the problem) or $g(x) = \frac{1}{a - b \cdot x/c}$ and I want to find the integrals.</p>
<p>Normally, these would both be pretty straightforward. $\int f(x)dx = \ln(x) + C$, and $\int g(x)dx = -\frac{c}{b}\ln(a \cdot c - b \cdot x) + C$.</p>
<p>Suppose now, however, that $a$, $b$, $c$, and $x$ are dimensionful quantities; more specifically, $a$ has the same units as $b$ and $x$ has the same units as $c$. We would expect that $\int \frac{dx}{x}$ should be dimensionless, and indeed the $\ln$ function does have a pure number result... but the argument to $\ln$ should also be dimensionless, which $x$ is <em>not</em>, which leads to a problem in interpretation.</p>
<p>Similarly, $\int g(x)dx$ should have units of $type(x)$ over $type(a)$, which indeed it does (since we defined $x$ & $c$ to have the same units, and $a$ and $b$ to have the same units), but it also involves a non-dimensionless argument to $\ln$.</p>
<p>So, how can this be resolved? How does one go about integrating inverse units in a way that makes sense?</p>
| [
{
"answer_id": 2085682,
"score": 4,
"text": "<p>Part of the problem here is that indefinite integration does not really make much sense; it is best regarded as a notational shorthand. The meaningful quantity is a definite integral: $\\int_a^b f(x) dx$. For $f(x)=1/x$, this <em>does</em> make dimensional sense, because as long as $a,b$ have the same sign, $\\int_a^b 1/x dx = \\ln(b/a)$. (If $a,b$ have opposite signs then the integral just plain fails to converge classically.)</p>\n\n<p>Back in terms of $\\ln(|b|)-\\ln(|a|)$, this still makes sense for dimensional $a,b$, because for any two choices of units of $a,b$, the two differ by a multiplicative constant. And $\\ln(cx)-\\ln(cy)=\\ln(c)+\\ln(x)-\\ln(c)-\\ln(y)=\\ln(x)-\\ln(y)$ (assuming $c,x,y>0$ for ease of presentation). So you have invariance under change of units, which is what we should have for a dimensionless quantity.</p>\n"
},
{
"answer_id": 2085717,
"score": 2,
"text": "<p>normally we say that the argument to any of the transcendental functions</p>\n\n<p>$$f(x) = \\sum\\limits_{n=0}^{\\infty} a_n \\, x^n $$</p>\n\n<p>must be dimensionless because of the Maclauren series used to evaluate it has many different powers of $x$, the coefficients $a_n$ are dimensionless, and the only way you can meaningfully add those terms is if they are all dimensionless, which means $x$ must be dimensionless.</p>\n\n<p>power functions</p>\n\n<p>$$ f(x) = x^p $$</p>\n\n<p>can take a dimensionful argument and will return another dimensionful result that is the dimension of the argument raised to the $p$ power.</p>\n\n<p>the $\\log(\\cdot)$ function might be a little different. it's not particularly good form but since</p>\n\n<p>$$ \\log(a \\, b) = \\log(a) + \\log(b) $$</p>\n\n<p>the $\\log(x \\cdot \\textsf{<unit>})$ where $x$ is dimensionless and multiplies the unit token, can be thought of the sum of the $\\log(x)$ and the $\\log(\\textsf{<unit>})$. This sorta works out with, say:</p>\n\n<p>$$\\begin{align}\n\\log \\left( \\frac{1 \\textsf{ mile}}{1 \\textsf{ km}} \\right) &= \\log(1 \\textsf{ mile}) - \\log(1 \\textsf{ km}) \\\\\n &= \\log(1) + \\log(\\textsf{ mile}) - (\\log(1) + \\log(\\textsf{ km})) \\\\\n &= \\log(1.609344) \\\\ \n & \\approx 0.47582664254 \\\\\n\\end{align}$$</p>\n\n<p>which is dimensionless. so somehow the $\\log(\\textsf{mile})$ and $\\log(\\textsf{km})$ must differ by a dimensionless constant. but i would not express it that way.</p>\n\n<p>i would generally insist that only like-dimensioned quantities may be added, subtracted, equated, or compared. that multiplication of dimensioned quantities multiplies their dimensions and that division and power (or root) functions reflect that convention, and that transcendentals must have dimensionless arguments.</p>\n"
}
] | Question: Dimensional Analyses in Integrals Producing Logarithms
Answer 1: <p>Part of the problem here is that indefinite integration does not really make much sense; it is best regarded as a notational shorthand. The meaningful quantity is a definite integral: $\int_a^b f(x) dx$. For $f(x)=1/x$, this <em>does</em> make dimensional sense, because as long as $a,b$ have the same sign, $\int_a^b 1/x dx = \ln(b/a)$. (If $a,b$ have opposite signs then the integral just plain fails to converge classically.)</p>
<p>Back in terms of $\ln(|b|)-\ln(|a|)$, this still makes sense for dimensional $a,b$, because for any two choices of units of $a,b$, the two differ by a multiplicative constant. And $\ln(cx)-\ln(cy)=\ln(c)+\ln(x)-\ln(c)-\ln(y)=\ln(x)-\ln(y)$ (assuming $c,x,y>0$ for ease of presentation). So you have invariance under change of units, which is what we should have for a dimensionless quantity.</p>
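<p>A quick numerical check of this unit-invariance claim, as a minimal Python sketch (the helper name <code>log_ratio</code> and the sample limits are invented for illustration):</p>
<pre><code>import math

# ln(b/a) = ln(cb/ca): the definite integral of 1/x is unchanged
# when both limits are rescaled by the same unit-conversion factor.

def log_ratio(a, b):
    """Definite integral of 1/x from a to b, for a and b of the same sign."""
    return math.log(b / a)

a_km, b_km = 2.0, 5.0                                # limits expressed in kilometres
km_per_mile = 1.609344
a_mi, b_mi = a_km / km_per_mile, b_km / km_per_mile  # the same limits in miles

print(log_ratio(a_km, b_km))  # 0.9162907318741551
print(log_ratio(a_mi, b_mi))  # same value up to float rounding: units cancel
</code></pre>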
Answer 2: <p>Normally we say that the argument to any of the transcendental functions</p>
<p>$$f(x) = \sum\limits_{n=0}^{\infty} a_n \, x^n $$</p>
<p>must be dimensionless, because the Maclaurin series used to evaluate it has many different powers of $x$, the coefficients $a_n$ are dimensionless, and the only way you can meaningfully add those terms is if they are all dimensionless, which means $x$ must be dimensionless.</p>
<p>Power functions</p>
<p>$$ f(x) = x^p $$</p>
<p>can take a dimensionful argument and will return a dimensionful result whose dimension is that of the argument raised to the $p$th power.</p>
<p>The $\log(\cdot)$ function is a little different. It's not particularly good form, but since</p>
<p>$$ \log(a \, b) = \log(a) + \log(b) $$</p>
<p>$\log(x \cdot \textsf{<unit>})$, where the dimensionless $x$ multiplies the unit token, can be thought of as the sum of $\log(x)$ and $\log(\textsf{<unit>})$. This works out with, say:</p>
<p>$$\begin{align}
\log \left( \frac{1 \textsf{ mile}}{1 \textsf{ km}} \right) &= \log(1 \textsf{ mile}) - \log(1 \textsf{ km}) \\
&= \log(1) + \log(\textsf{ mile}) - (\log(1) + \log(\textsf{ km})) \\
&= \log(\textsf{ mile}) - \log(\textsf{ km}) = \log \left( \frac{\textsf{mile}}{\textsf{km}} \right) \\
&= \log(1.609344) \\
& \approx 0.47582664254 \\
\end{align}$$</p>
<p>which is dimensionless. So somehow $\log(\textsf{mile})$ and $\log(\textsf{km})$ must differ by a dimensionless constant, but I would not express it that way.</p>
<p>I would generally insist that only like-dimensioned quantities may be added, subtracted, equated, or compared; that multiplication of dimensioned quantities multiplies their dimensions; that division and power (or root) functions follow that convention; and that transcendental functions must have dimensionless arguments.</p>
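<p>A toy sketch of the bookkeeping these rules imply; the <code>Quantity</code> class below is hypothetical (not a real units library), just enough to show that $\log(\textsf{mile}/\textsf{km})$ is accepted while $\log(\textsf{mile})$ alone is rejected:</p>
<pre><code>import math

class Quantity:
    """Toy dimensioned value: a number carrying an integer power of length."""
    def __init__(self, value, length_power=0):
        self.value, self.length_power = value, length_power

    def __truediv__(self, other):
        # Division subtracts dimensions, so mile / km comes out dimensionless.
        return Quantity(self.value / other.value,
                        self.length_power - other.length_power)

    def log(self):
        # Transcendental functions only accept dimensionless arguments.
        if self.length_power != 0:
            raise ValueError("log() needs a dimensionless argument")
        return math.log(self.value)

mile = Quantity(1609.344, length_power=1)  # both lengths stored in metres
km = Quantity(1000.0, length_power=1)

print((mile / km).log())  # 0.47582664... as in the computation above
# mile.log() would raise ValueError: the argument carries a length dimension
</code></pre>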
|
Definition of Product Topology | 2,086,178 | 5 | https://math.stackexchange.com/questions/2086178/definition-of-product-topology | <blockquote>
<p><strong>Definition:</strong> Let $X$ and $Y$ be topological spaces. The <em>product topology</em> on $X \times Y$ is the topology having as basis the collection $\mathcal{B}$ of all sets of the form $U \times V$, where $U$ is an open set of $X$ and $V$ is an open set of $Y$.</p>
</blockquote>
<p>I've taken this definition from <em>Munkres: Topology - A First Course</em>, and the notation he uses can at times be highly confusing. For example, he uses $x \times y$ to denote the ordered pair $(x, y)$.</p>
<p>Now I'm not sure whether, in the definition, $U \times V$ denotes the Cartesian product of the two sets or the ordered pair $(U, V)$.</p>
<p>So which of the following is the correct basis for the product topology as stated in the definition above, given that $\mathcal{T_X}$ is the topology on $X$ and $\mathcal{T_Y}$ is the topology on $Y$?</p>
<p>$$\mathcal{B} = \left\{U \times V \ \middle| \ U \in \mathcal{T_X} \ \text{ and } \ V \in \mathcal{T_Y}\right\}$$
$$\mathcal{B'} = \left\{(U, V) \ \middle| \ U \in \mathcal{T_X} \ \text{ and } \ V \in \mathcal{T_Y}\right\} = \mathcal{T_X} \times \mathcal{T_Y}$$</p>
| [
{
"answer_id": 2086233,
"score": 3,
"text": "<p>As already said in comments, the basis consists of Cartesian products $U\\times V$, where $U$ is open in $X$ and $V$ is open in $Y$.</p>\n\n<p>You can notice that that other possibility does not make sense, since sets from $\\mathcal B$ must be subsets of $X\\times Y$.</p>\n\n<hr>\n\n<p>Product of finitely many topological spaces is defined in similar way as for two spaces. But perhaps it is worth to add a word of warning that if you define product of arbitrarily many spaces, then you have to be a bit more careful. (If you simply take products of open sets, you would get <a href=\"https://en.wikipedia.org/wiki/Box_topology\" rel=\"nofollow noreferrer\">box topology</a>, which has different properties from <a href=\"https://en.wikipedia.org/wiki/Product_topology\" rel=\"nofollow noreferrer\">product topology</a>.)</p>\n"
},
{
"answer_id": 4424754,
"score": 0,
"text": "<p>I just want to spell this out a little more explicitly for anyone who's confused about this. Here's an easy way to tell which of <span class=\"math-container\">$\\mathcal B$</span> and <span class=\"math-container\">$\\mathcal B'$</span> is correct. If you know what a base/basis is, then you know that you're supposed to be able to create any open set (element in the topology) through union of elements (sets) in the basis.</p>\n<p>If you look at <span class=\"math-container\">$\\mathcal B$</span> and <span class=\"math-container\">$\\mathcal B'$</span>, you'll see that elements in the former are sets (such as <span class=\"math-container\">$U \\times V$</span>), but elements in the latter are <em>pairs</em> of sets (such as <span class=\"math-container\">$(U, V)$</span>). The union operation is applied to sets, so it's not even applicable to elements of <span class=\"math-container\">$\\mathcal B'$</span>. Therefore, <span class=\"math-container\">$\\mathcal B'$</span> can't possibly be correct, so it must be <span class=\"math-container\">$\\mathcal B$</span>.</p>\n"
}
] | Question: Definition of Product Topology
Answer 1: <p>As already said in the comments, the basis consists of Cartesian products $U\times V$, where $U$ is open in $X$ and $V$ is open in $Y$.</p>
<p>You can notice that the other possibility does not make sense, since sets from $\mathcal B$ must be subsets of $X\times Y$.</p>
<hr>
<p>A product of finitely many topological spaces is defined in a similar way as for two spaces. But it is perhaps worth adding a word of warning: if you define a product of arbitrarily many spaces, then you have to be a bit more careful. (If you simply take products of open sets, you get the <a href="https://en.wikipedia.org/wiki/Box_topology" rel="nofollow noreferrer">box topology</a>, which has different properties from the <a href="https://en.wikipedia.org/wiki/Product_topology" rel="nofollow noreferrer">product topology</a>.)</p>
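<p>For two small finite spaces this can be checked directly. Below is a minimal Python sketch, assuming the toy topologies <code>tx</code> and <code>ty</code> (these names and the helper <code>generate</code> are invented for the example); it builds $\mathcal B$ out of genuine subsets of $X \times Y$ and closes it under unions:</p>
<pre><code>from itertools import chain, combinations, product

X, Y = {1, 2}, {"a", "b"}
tx = [set(), {1}, {1, 2}]        # a topology on X
ty = [set(), {"a"}, {"a", "b"}]  # a topology on Y

# Basis B: Cartesian products U x V, each one a subset of X x Y.
basis = [frozenset(product(U, V)) for U in tx for V in ty]

def generate(basis):
    """All unions of subfamilies of the basis (fine for tiny examples)."""
    families = chain.from_iterable(
        combinations(basis, r) for r in range(len(basis) + 1))
    return {frozenset(chain.from_iterable(fam)) for fam in families}

topology = generate(basis)
print(len(topology))  # 6 open sets for this small example
</code></pre>
<p>Since the number of factors here is finite, this construction coincides with the box construction; the distinction warned about above only appears with infinitely many factors.</p>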
Answer 2: <p>I just want to spell this out a little more explicitly for anyone who's confused about this. Here's an easy way to tell which of <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span> is correct. If you know what a base/basis is, then you know that you're supposed to be able to create any open set (an element of the topology) through unions of elements (sets) of the basis.</p>
<p>If you look at <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span>, you'll see that elements of the former are sets (such as <span class="math-container">$U \times V$</span>), but elements of the latter are <em>pairs</em> of sets (such as <span class="math-container">$(U, V)$</span>). The union operation is applied to sets of points, so it's not even applicable to elements of <span class="math-container">$\mathcal B'$</span>. Therefore, <span class="math-container">$\mathcal B'$</span> can't possibly be correct, so it must be <span class="math-container">$\mathcal B$</span>.</p>
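<p>The type mismatch can be made concrete with a tiny sketch (the variable names are invented): an element of $\mathcal B$ is a set of points of $X \times Y$, so unions apply to it, while an element of $\mathcal B'$ is just a pair of sets:</p>
<pre><code>U, V = frozenset({1}), frozenset({"a"})

b_element = frozenset((x, y) for x in U for y in V)  # U x V: a set of points of X x Y
b_prime_element = (U, V)                             # the ordered pair (U, V)

print(b_element | frozenset({(2, "b")}))  # union works: still a subset of X x Y
print(type(b_prime_element).__name__)     # 'tuple': a pair, not a set of points
</code></pre>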
Answer 3: <p>As already said in comments, the basis consists of Cartesian products $U\times V$, where $U$ is open in $X$ and $V$ is open in $Y$.</p>
<p>You can notice that that other possibility does not make sense, since sets from $\mathcal B$ must be subsets of $X\times Y$.</p>
<hr>
<p>Product of finitely many topological spaces is defined in similar way as for two spaces. But perhaps it is worth to add a word of warning that if you define product of arbitrarily many spaces, then you have to be a bit more careful. (If you simply take products of open sets, you would get <a href="https://en.wikipedia.org/wiki/Box_topology" rel="nofollow noreferrer">box topology</a>, which has different properties from <a href="https://en.wikipedia.org/wiki/Product_topology" rel="nofollow noreferrer">product topology</a>.)</p>
Answer 4: <p>I just want to spell this out a little more explicitly for anyone who's confused about this. Here's an easy way to tell which of <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span> is correct. If you know what a base/basis is, then you know that you're supposed to be able to create any open set (element in the topology) through union of elements (sets) in the basis.</p>
<p>If you look at <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span>, you'll see that elements in the former are sets (such as <span class="math-container">$U \times V$</span>), but elements in the latter are <em>pairs</em> of sets (such as <span class="math-container">$(U, V)$</span>). The union operation is applied to sets, so it's not even applicable to elements of <span class="math-container">$\mathcal B'$</span>. Therefore, <span class="math-container">$\mathcal B'$</span> can't possibly be correct, so it must be <span class="math-container">$\mathcal B$</span>.</p>
Answer 5: <p>As already said in comments, the basis consists of Cartesian products $U\times V$, where $U$ is open in $X$ and $V$ is open in $Y$.</p>
<p>You can notice that that other possibility does not make sense, since sets from $\mathcal B$ must be subsets of $X\times Y$.</p>
<hr>
<p>Product of finitely many topological spaces is defined in similar way as for two spaces. But perhaps it is worth to add a word of warning that if you define product of arbitrarily many spaces, then you have to be a bit more careful. (If you simply take products of open sets, you would get <a href="https://en.wikipedia.org/wiki/Box_topology" rel="nofollow noreferrer">box topology</a>, which has different properties from <a href="https://en.wikipedia.org/wiki/Product_topology" rel="nofollow noreferrer">product topology</a>.)</p>
Answer 6: <p>I just want to spell this out a little more explicitly for anyone who's confused about this. Here's an easy way to tell which of <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span> is correct. If you know what a base/basis is, then you know that you're supposed to be able to create any open set (element in the topology) through union of elements (sets) in the basis.</p>
<p>If you look at <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span>, you'll see that elements in the former are sets (such as <span class="math-container">$U \times V$</span>), but elements in the latter are <em>pairs</em> of sets (such as <span class="math-container">$(U, V)$</span>). The union operation is applied to sets, so it's not even applicable to elements of <span class="math-container">$\mathcal B'$</span>. Therefore, <span class="math-container">$\mathcal B'$</span> can't possibly be correct, so it must be <span class="math-container">$\mathcal B$</span>.</p>
Answer 7: <p>As already said in comments, the basis consists of Cartesian products $U\times V$, where $U$ is open in $X$ and $V$ is open in $Y$.</p>
<p>You can notice that that other possibility does not make sense, since sets from $\mathcal B$ must be subsets of $X\times Y$.</p>
<hr>
<p>Product of finitely many topological spaces is defined in similar way as for two spaces. But perhaps it is worth to add a word of warning that if you define product of arbitrarily many spaces, then you have to be a bit more careful. (If you simply take products of open sets, you would get <a href="https://en.wikipedia.org/wiki/Box_topology" rel="nofollow noreferrer">box topology</a>, which has different properties from <a href="https://en.wikipedia.org/wiki/Product_topology" rel="nofollow noreferrer">product topology</a>.)</p>
Answer 8: <p>I just want to spell this out a little more explicitly for anyone who's confused about this. Here's an easy way to tell which of <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span> is correct. If you know what a base/basis is, then you know that you're supposed to be able to create any open set (element in the topology) through union of elements (sets) in the basis.</p>
<p>If you look at <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span>, you'll see that elements in the former are sets (such as <span class="math-container">$U \times V$</span>), but elements in the latter are <em>pairs</em> of sets (such as <span class="math-container">$(U, V)$</span>). The union operation is applied to sets, so it's not even applicable to elements of <span class="math-container">$\mathcal B'$</span>. Therefore, <span class="math-container">$\mathcal B'$</span> can't possibly be correct, so it must be <span class="math-container">$\mathcal B$</span>.</p>
Answer 9: <p>As already said in comments, the basis consists of Cartesian products $U\times V$, where $U$ is open in $X$ and $V$ is open in $Y$.</p>
<p>You can notice that that other possibility does not make sense, since sets from $\mathcal B$ must be subsets of $X\times Y$.</p>
<hr>
<p>Product of finitely many topological spaces is defined in similar way as for two spaces. But perhaps it is worth to add a word of warning that if you define product of arbitrarily many spaces, then you have to be a bit more careful. (If you simply take products of open sets, you would get <a href="https://en.wikipedia.org/wiki/Box_topology" rel="nofollow noreferrer">box topology</a>, which has different properties from <a href="https://en.wikipedia.org/wiki/Product_topology" rel="nofollow noreferrer">product topology</a>.)</p>
Answer 10: <p>I just want to spell this out a little more explicitly for anyone who's confused about this. Here's an easy way to tell which of <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span> is correct. If you know what a base/basis is, then you know that you're supposed to be able to create any open set (element in the topology) through union of elements (sets) in the basis.</p>
<p>If you look at <span class="math-container">$\mathcal B$</span> and <span class="math-container">$\mathcal B'$</span>, you'll see that elements in the former are sets (such as <span class="math-container">$U \times V$</span>), but elements in the latter are <em>pairs</em> of sets (such as <span class="math-container">$(U, V)$</span>). The union operation is applied to sets, so it's not even applicable to elements of <span class="math-container">$\mathcal B'$</span>. Therefore, <span class="math-container">$\mathcal B'$</span> can't possibly be correct, so it must be <span class="math-container">$\mathcal B$</span>.</p>
|
Proving that $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = \log 2$ | 2,086,393 | 5 | https://math.stackexchange.com/questions/2086393/proving-that-sum-n-1-infty-frac-1n1n-log-2 | <p>As stated in the title, I am trying to prove that $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = \log 2$. I think I need to use the geometric series like this:</p>
<p>$$1+u+u^2+\cdots+u^n = \frac{1-u^{n+1}}{1-u}$$</p>
<p>Then setting $u = -t$, rewrite it like so:</p>
<p>$$1-t+t^2-\cdots+(-1)^n t^n + \frac{(-1)^{n+1}t^{n+1}}{1+t}= \frac{1}{1+t}$$</p>
<p>But now I am not sure where to go next in order to get to my conclusion, any guidance and explanation would be appreciated, thanks!</p>
| [
{
"answer_id": 2086417,
"score": 4,
"text": "<p>Take the integral from $0$ to $1$ on both sides of the equation, yielding</p>\n\n<p>$$1 - \\frac{1}{2} + \\frac{1}{3} -\\cdots + (-1)^n\\frac{1}{n+1} + (-1)^{n+1}\\int_0^1 \\frac{t^{n+1}}{1+t}\\, dt = \\log 2$$</p>\n\n<p>We want to show </p>\n\n<p>$$\\lim_{n\\to\\infty} (-1)^{n+1}\\int_0^1 \\frac{t^{n+1}}{1+t}\\, dt = 0\\tag{*}$$</p>\n\n<p>To do so, we use the inequality $1 + t \\ge 1$ for $t \\ge 0$ to obtain </p>\n\n<p>$$\\left\\lvert (-1)^{n-1} \\int_0^1 \\frac{t^{n+1}}{1+t}\\, dt\\right\\rvert = \\int_0^1 \\frac{t^{n+1}}{1+t}\\, dt \\le \\int_0^1 t^{n+1}\\, dt = \\frac{1}{n+2}$$</p>\n\n<p>Since $\\lim_{n\\to \\infty} \\frac{1}{n+2} = 0$, (*) holds.</p>\n"
},
{
"answer_id": 2086414,
"score": 2,
"text": "<p><strong>hint</strong></p>\n\n<p>Let $f(x)=\\ln(1+x)$.\n$f$ is $C^{\\infty}$ near $0$.\nthus for $n>0, \\exists c\\in(0,1)\\;:\\;$</p>\n\n<p>$$f(1)=f(0)+\\sum_{k=1}^n\\frac{1}{k!}f^{(k)}(0)+\\frac{1}{(n+1)!}f^{(n+1)}(c)$$</p>\n\n<p>but $f^{(n+1)}(c)=(-1)^{n}\\frac{n!}{(1+c)^{n+1}}$</p>\n\n<p>and $0<\\frac{1}{1+c}<1$ , so</p>\n\n<p>$$\\lim_{n\\to+\\infty}\\sum_{k=1}^{n}\\frac{(-1)^{k+1}}{k}=f(1)=\\ln(2).$$</p>\n"
},
{
"answer_id": 2086464,
"score": 0,
"text": "<p>Fom the Taylor expansion of the natural logarithm </p>\n\n<p>$$\\ln(1+x)=\\sum_{n=1}^\\infty\\frac{(-1)^{n+1}}nx^n$$</p>\n\n<p>With $x=1$, $\\sum_{n=1}^\\infty\\frac{(-1)^{n+1}}n=\\ln(2)$</p>\n"
}
] | Question: Proving that $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = \log 2$
Answer 1: <p>Take the integral from $0$ to $1$ on both sides of the equation, yielding</p>
<p>$$1 - \frac{1}{2} + \frac{1}{3} -\cdots + (-1)^n\frac{1}{n+1} + (-1)^{n+1}\int_0^1 \frac{t^{n+1}}{1+t}\, dt = \log 2$$</p>
<p>We want to show </p>
<p>$$\lim_{n\to\infty} (-1)^{n+1}\int_0^1 \frac{t^{n+1}}{1+t}\, dt = 0\tag{*}$$</p>
<p>To do so, we use the inequality $1 + t \ge 1$ for $t \ge 0$ to obtain </p>
<p>$$\left\lvert (-1)^{n-1} \int_0^1 \frac{t^{n+1}}{1+t}\, dt\right\rvert = \int_0^1 \frac{t^{n+1}}{1+t}\, dt \le \int_0^1 t^{n+1}\, dt = \frac{1}{n+2}$$</p>
<p>Since $\lim_{n\to \infty} \frac{1}{n+2} = 0$, (*) holds.</p>
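A quick numerical cross-check of this bound (not part of the proof; a minimal sketch assuming Python 3 with only the standard library): with $m=n+1$ terms summed, the error $|S_m-\log 2|$ should stay below $\frac{1}{m+1}$.

```python
import math

def partial_sum(m):
    """S_m = sum_{k=1}^{m} (-1)^(k+1) / k, the m-term partial sum."""
    return sum((-1) ** (k + 1) / k for k in range(1, m + 1))

for m in (10, 100, 1000):
    err = abs(partial_sum(m) - math.log(2))
    bound = 1 / (m + 1)  # the 1/(n+2) bound above, with m = n+1 terms
    print(f"m={m:5d}  error={err:.2e}  bound={bound:.2e}  ok={err <= bound}")
```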
Answer 2: <p><strong>Hint</strong></p>
<p>Let $f(x)=\ln(1+x)$.
$f$ is $C^{\infty}$ near $0$.
Thus, for $n>0, \exists c\in(0,1)\;:\;$</p>
<p>$$f(1)=f(0)+\sum_{k=1}^n\frac{1}{k!}f^{(k)}(0)+\frac{1}{(n+1)!}f^{(n+1)}(c)$$</p>
<p>but $f^{(n+1)}(c)=(-1)^{n}\frac{n!}{(1+c)^{n+1}}$</p>
<p>and $0<\frac{1}{1+c}<1$, so</p>
<p>$$\lim_{n\to+\infty}\sum_{k=1}^{n}\frac{(-1)^{k+1}}{k}=f(1)=\ln(2).$$</p>
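To spell out why the remainder term vanishes (this step is my addition, not the answerer's): its magnitude is $$\left|\frac{f^{(n+1)}(c)}{(n+1)!}\right|=\frac{n!}{(n+1)!\,(1+c)^{n+1}}=\frac{1}{(n+1)(1+c)^{n+1}}\le\frac{1}{n+1}\longrightarrow 0 \quad (n\to\infty),$$ which is exactly what lets the partial sums converge to $f(1)=\ln 2$.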
Answer 3: <p>From the Taylor expansion of the natural logarithm </p>
<p>$$\ln(1+x)=\sum_{n=1}^\infty\frac{(-1)^{n+1}}nx^n$$</p>
<p>With $x=1$, $\sum_{n=1}^\infty\frac{(-1)^{n+1}}n=\ln(2)$</p>
|
Determine whether the following sequence is increasing or decreasing $\frac{n^2+2n+1}{3n^2+n}$ | 2,086,547 | 5 | https://math.stackexchange.com/questions/2086547/determine-whether-the-following-sequence-is-increasing-or-decreasing-fracn2 | <p>Determine whether the following sequence is increasing or decreasing: </p>
<p>$$\frac{n^2+2n+1}{3n^2+n}$$</p>
<p>I'm not sure whether my solution is correct:</p>
<p>$$\frac{n^2+2n+1}{3n^2+n}=\frac{n(n+2)+1}{n(3n+1)}=\frac{n+2}{3n+1}+\frac{1}{n(3n+1)}.$$
Let's prove $\frac{n+2}{3n+1}$ is a decreasing sequence.</p>
<p>$$a_n>a_{n+1} \Leftrightarrow \frac{n+2}{3n+1}>\frac{n+3}{3n+4}\Leftrightarrow(n+2)(3n+4)>(n+3)(3n+1)\Leftrightarrow3n^2+10n+8>3n^2+10n+3\Leftrightarrow 8>3$$</p>
<p>So $\frac{n+2}{3n+1}$ is a decreasing sequence, and we know that $\frac{1}{n(3n+1)}$ is also decreasing, so the given sequence is decreasing as a sum of two decreasing sequences.</p>
| [
{
"answer_id": 2086559,
"score": 5,
"text": "<p>Let $$a_n=\\frac{n^2+2n+1}{3n^2+n}$$ Then\n$$a_{n+1}-a_n=\\frac{-5n^2-11n-4}{(3n^2+7n+4)(3n^2+n)}<0$$</p>\n"
},
{
"answer_id": 2086554,
"score": 4,
"text": "<p>Your solution looks good.</p>\n\n<p>Another approach could be:</p>\n\n<p>$$a_n=\\frac{3n^2+n}{(n+1)^2}=\\frac{3(n+1)^2-5n-3}{(n+1)^2}=3-\\left[\\frac{5n}{(n+1)^2}+\\frac{3}{(n+1)^2}\\right]=3-\\left[\\frac{5}{n+2+\\frac{1}{n}}+\\frac{3}{(n+1)^2}\\right]$$</p>\n\n<p>$a_n$ is increasing so what can we conclude about $\\frac{1}{a_n}$?</p>\n"
},
{
"answer_id": 2086566,
"score": 3,
"text": "<p>HINT Find the difference (an+1 - an) and study the sign of this difference. If it is positive, the sequence is increasing, otherwise it is decreasing.</p>\n"
},
{
"answer_id": 2086596,
"score": 3,
"text": "<p>You're solution is fine. Here is another, perhaps more efficient way forward.</p>\n\n<p>We start with the decomposition in the OP as expressed by</p>\n\n<p>$$\\frac{n^2+2n+1}{3n^2+n}=\\frac{n+2}{3n+1}+\\frac{1}{n(3n+1)} \\tag 1$$</p>\n\n<p>Then, we simply note that the first term on the right-hand side of $(1)$ can be written as</p>\n\n<p>$$\\frac{n+2}{3n+1}=\\frac13 \\frac{3n+6}{3n+1}=\\frac13 \\left(1+ \\frac{5}{3n+1}\\right) \\tag 2$$</p>\n\n<p>from which we see by inspection that $\\frac{n+2}{3n+1}$ is decreasing. And we are done!</p>\n"
},
{
"answer_id": 2086591,
"score": 1,
"text": "<p>After breaking up the fraction using <a href=\"https://en.wikipedia.org/wiki/Partial_fraction_decomposition\" rel=\"nofollow noreferrer\">Partial Fractions</a>, we see that $\\frac3n$ is bigger than $\\frac4{3n+1}$, so we give $\\frac4{3n}$ of $\\frac3n$ to $-\\frac4{3n+1}$ to make it positive, but decreasing.\n$$\n\\begin{align}\n\\frac{n^2+2n+1}{3n^2+n}\n&=\\frac13\\left(1+\\frac{5n+3}{3n^2+n}\\right)\\\\\n&=\\frac13\\left(1-\\frac4{3n+1}+\\frac3n\\right)\\\\\n&=\\frac13\\left(1-\\frac4{3n+1}+\\frac4{3n}+\\frac5{3n}\\right)\\\\\n&=\\frac13\\left(1+\\frac4{3n(3n+1)}+\\frac5{3n}\\right)\\tag{1}\n\\end{align}\n$$\nFor $n\\gt0$, each non-constant term in $(1)$ is decreasing.</p>\n"
}
] | Question: Determine whether the following sequence is increasing or decreasing $\frac{n^2+2n+1}{3n^2+n}$
Answer 1: <p>Let $$a_n=\frac{n^2+2n+1}{3n^2+n}$$ Then
$$a_{n+1}-a_n=\frac{-5n^2-11n-4}{(3n^2+7n+4)(3n^2+n)}<0$$</p>
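The algebra behind this difference can be cross-checked symbolically; a minimal sketch, assuming Python 3 with sympy installed (not part of the original answer):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
a = (n**2 + 2*n + 1) / (3*n**2 + n)

# Difference of consecutive terms, simplified.
diff = sp.simplify(a.subs(n, n + 1) - a)

# It should agree with the closed form above, which is negative for n >= 1.
claimed = (-5*n**2 - 11*n - 4) / ((3*n**2 + 7*n + 4) * (3*n**2 + n))
assert sp.simplify(diff - claimed) == 0
print(diff)
```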
Answer 2: <p>Your solution looks good.</p>
<p>Another approach could be to look at the reciprocal $b_n=\frac{1}{a_n}$:</p>
<p>$$b_n=\frac{3n^2+n}{(n+1)^2}=\frac{3(n+1)^2-5n-3}{(n+1)^2}=3-\left[\frac{5n}{(n+1)^2}+\frac{3}{(n+1)^2}\right]=3-\left[\frac{5}{n+2+\frac{1}{n}}+\frac{3}{(n+1)^2}\right]$$</p>
<p>$b_n$ is increasing, so what can we conclude about $a_n=\frac{1}{b_n}$?</p>
Answer 3: <p>HINT: Find the difference $a_{n+1} - a_n$ and study the sign of this difference. If it is positive, the sequence is increasing; otherwise it is decreasing.</p>
Answer 4: <p>Your solution is fine. Here is another, perhaps more efficient way forward.</p>
<p>We start with the decomposition in the OP as expressed by</p>
<p>$$\frac{n^2+2n+1}{3n^2+n}=\frac{n+2}{3n+1}+\frac{1}{n(3n+1)} \tag 1$$</p>
<p>Then, we simply note that the first term on the right-hand side of $(1)$ can be written as</p>
<p>$$\frac{n+2}{3n+1}=\frac13 \frac{3n+6}{3n+1}=\frac13 \left(1+ \frac{5}{3n+1}\right) \tag 2$$</p>
<p>from which we see by inspection that $\frac{n+2}{3n+1}$ is decreasing. And we are done!</p>
Answer 5: <p>After breaking up the fraction using <a href="https://en.wikipedia.org/wiki/Partial_fraction_decomposition" rel="nofollow noreferrer">Partial Fractions</a>, we see that $\frac3n$ is bigger than $\frac4{3n+1}$, so we give $\frac4{3n}$ of $\frac3n$ to $-\frac4{3n+1}$ to make it positive, but decreasing.
$$
\begin{align}
\frac{n^2+2n+1}{3n^2+n}
&=\frac13\left(1+\frac{5n+3}{3n^2+n}\right)\\
&=\frac13\left(1-\frac4{3n+1}+\frac3n\right)\\
&=\frac13\left(1-\frac4{3n+1}+\frac4{3n}+\frac5{3n}\right)\\
&=\frac13\left(1+\frac4{3n(3n+1)}+\frac5{3n}\right)\tag{1}
\end{align}
$$
For $n\gt0$, each non-constant term in $(1)$ is decreasing.</p>
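As a final numeric sanity check (mine, not from the thread; a minimal sketch assuming Python 3), decomposition $(1)$ matches the original terms and the sequence is strictly decreasing:

```python
def a(n):
    """The original terms (n^2 + 2n + 1) / (3n^2 + n)."""
    return (n**2 + 2*n + 1) / (3*n**2 + n)

def rhs(n):
    """Right-hand side of decomposition (1)."""
    return (1 + 4 / (3*n * (3*n + 1)) + 5 / (3*n)) / 3

assert all(abs(a(n) - rhs(n)) < 1e-12 for n in range(1, 1000))
vals = [a(n) for n in range(1, 12)]
assert all(x > y for x, y in zip(vals, vals[1:]))  # strictly decreasing
print(vals[:5])
```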
Answer 6: <p>Let $$a_n=\frac{n^2+2n+1}{3n^2+n}$$ Then
$$a_{n+1}-a_n=\frac{-5n^2-11n-4}{(3n^2+7n+4)(3n^2+n)}<0$$</p>
Answer 7: <p>Your solution looks good.</p>
<p>Another approach could be:</p>
<p>$$a_n=\frac{3n^2+n}{(n+1)^2}=\frac{3(n+1)^2-5n-3}{(n+1)^2}=3-\left[\frac{5n}{(n+1)^2}+\frac{3}{(n+1)^2}\right]=3-\left[\frac{5}{n+2+\frac{1}{n}}+\frac{3}{(n+1)^2}\right]$$</p>
<p>$a_n$ is increasing so what can we conclude about $\frac{1}{a_n}$?</p>
Answer 8: <p>HINT Find the difference (an+1 - an) and study the sign of this difference. If it is positive, the sequence is increasing, otherwise it is decreasing.</p>
Answer 9: <p>You're solution is fine. Here is another, perhaps more efficient way forward.</p>
<p>We start with the decomposition in the OP as expressed by</p>
<p>$$\frac{n^2+2n+1}{3n^2+n}=\frac{n+2}{3n+1}+\frac{1}{n(3n+1)} \tag 1$$</p>
<p>Then, we simply note that the first term on the right-hand side of $(1)$ can be written as</p>
<p>$$\frac{n+2}{3n+1}=\frac13 \frac{3n+6}{3n+1}=\frac13 \left(1+ \frac{5}{3n+1}\right) \tag 2$$</p>
<p>from which we see by inspection that $\frac{n+2}{3n+1}$ is decreasing. And we are done!</p>
Answer 10: <p>After breaking up the fraction using <a href="https://en.wikipedia.org/wiki/Partial_fraction_decomposition" rel="nofollow noreferrer">Partial Fractions</a>, we see that $\frac3n$ is bigger than $\frac4{3n+1}$, so we give $\frac4{3n}$ of $\frac3n$ to $-\frac4{3n+1}$ to make it positive, but decreasing.
$$
\begin{align}
\frac{n^2+2n+1}{3n^2+n}
&=\frac13\left(1+\frac{5n+3}{3n^2+n}\right)\\
&=\frac13\left(1-\frac4{3n+1}+\frac3n\right)\\
&=\frac13\left(1-\frac4{3n+1}+\frac4{3n}+\frac5{3n}\right)\\
&=\frac13\left(1+\frac4{3n(3n+1)}+\frac5{3n}\right)\tag{1}
\end{align}
$$
For $n\gt0$, each non-constant term in $(1)$ is decreasing.</p>
Answer 11: <p>Let $$a_n=\frac{n^2+2n+1}{3n^2+n}$$ Then
$$a_{n+1}-a_n=\frac{-5n^2-11n-4}{(3n^2+7n+4)(3n^2+n)}<0$$</p>
Answer 12: <p>Your solution looks good.</p>
<p>Another approach could be:</p>
<p>$$a_n=\frac{3n^2+n}{(n+1)^2}=\frac{3(n+1)^2-5n-3}{(n+1)^2}=3-\left[\frac{5n}{(n+1)^2}+\frac{3}{(n+1)^2}\right]=3-\left[\frac{5}{n+2+\frac{1}{n}}+\frac{3}{(n+1)^2}\right]$$</p>
<p>$a_n$ is increasing so what can we conclude about $\frac{1}{a_n}$?</p>
Answer 13: <p>HINT Find the difference (an+1 - an) and study the sign of this difference. If it is positive, the sequence is increasing, otherwise it is decreasing.</p>
Answer 14: <p>You're solution is fine. Here is another, perhaps more efficient way forward.</p>
<p>We start with the decomposition in the OP as expressed by</p>
<p>$$\frac{n^2+2n+1}{3n^2+n}=\frac{n+2}{3n+1}+\frac{1}{n(3n+1)} \tag 1$$</p>
<p>Then, we simply note that the first term on the right-hand side of $(1)$ can be written as</p>
<p>$$\frac{n+2}{3n+1}=\frac13 \frac{3n+6}{3n+1}=\frac13 \left(1+ \frac{5}{3n+1}\right) \tag 2$$</p>
<p>from which we see by inspection that $\frac{n+2}{3n+1}$ is decreasing. And we are done!</p>
Answer 15: <p>After breaking up the fraction using <a href="https://en.wikipedia.org/wiki/Partial_fraction_decomposition" rel="nofollow noreferrer">Partial Fractions</a>, we see that $\frac3n$ is bigger than $\frac4{3n+1}$, so we give $\frac4{3n}$ of $\frac3n$ to $-\frac4{3n+1}$ to make it positive, but decreasing.
$$
\begin{align}
\frac{n^2+2n+1}{3n^2+n}
&=\frac13\left(1+\frac{5n+3}{3n^2+n}\right)\\
&=\frac13\left(1-\frac4{3n+1}+\frac3n\right)\\
&=\frac13\left(1-\frac4{3n+1}+\frac4{3n}+\frac5{3n}\right)\\
&=\frac13\left(1+\frac4{3n(3n+1)}+\frac5{3n}\right)\tag{1}
\end{align}
$$
For $n\gt0$, each non-constant term in $(1)$ is decreasing.</p>
|
Solve floor equation over real numbers: $\lfloor x \rfloor + \lfloor -x \rfloor = \lfloor \log x \rfloor$ | 2,087,192 | 5 | https://math.stackexchange.com/questions/2087192/solve-floor-equation-over-real-numbers-lfloor-x-rfloor-lfloor-x-rfloor | <p>Consider: $\lfloor x \rfloor + \lfloor -x \rfloor = \lfloor \log x \rfloor$. How can we solve it over the real numbers?</p>
<p>My attempt: I tried to solve it on several intervals but didn't get any result.</p>
<p>Please help!</p>
| [
{
"answer_id": 2087198,
"score": 12,
"text": "<p>Hint:</p>\n\n<p>If $x \\in \\mathbb Z$\n$$\\lfloor x \\rfloor + \\lfloor -x \\rfloor =0$$</p>\n\n<p>If $x \\not\\in \\mathbb Z$\n$$\\lfloor x \\rfloor + \\lfloor -x \\rfloor =-1$$</p>\n\n<p>Case 1) $$\\lfloor \\ln x \\rfloor=0$$\n$$0\\le \\ln x<1$$\n$$1\\le x<e$$\n$$x \\in \\{1,2\\}$$</p>\n\n<p>Case 2) $$\\lfloor \\ln x \\rfloor=-1$$\n$$-1\\le \\ln x<0$$\n$$\\frac 1e\\le x<1 $$</p>\n"
},
{
"answer_id": 2087202,
"score": 3,
"text": "<p>We know that if $x \\in \\mathbb Z$\n$$\\lfloor x \\rfloor + \\lfloor -x \\rfloor =0$$</p>\n\n<p>And if $x \\not\\in \\mathbb Z$\n$$\\lfloor x \\rfloor + \\lfloor -x \\rfloor =-1$$</p>\n\n<p>So we can easily see the solutions are $x \\in \\mathbb Z \\in [1,e) $ and $x \\not\\in \\mathbb Z \\in [\\frac {1}{e},1) $. Hope it helps. </p>\n"
},
{
"answer_id": 2087201,
"score": 2,
"text": "<p>Hint:</p>\n\n<p>note that:</p>\n\n<p>$\n\\lfloor x\\rfloor+\\lfloor -x\\rfloor=0\n$\nif $x \\in \\mathbb{Z}$, and </p>\n\n<p>$\n\\lfloor x\\rfloor+\\lfloor -x\\rfloor=-1\n$\nif $x$ is not an integer.</p>\n"
},
{
"answer_id": 2087203,
"score": 2,
"text": "<p>For each integer value the LHS equals $0$, for each non-integer value it's $-1$.</p>\n\n<p>So find integer values s.t. $0 \\le \\log x < 1$ and find all non-integers s.t. $-1 \\le \\log x < 0$</p>\n"
},
{
"answer_id": 2087207,
"score": 2,
"text": "<p>The range of $[x]+[-x]$ will be $\\{-1,0\\}$. If $x$ is integer then $0$, if not integer then $-1$.</p>\n\n<p>$[\\log x]$ should be $\\{-1,0\\}$ $\\Rightarrow$ $\\log x$ can take values $[-1,1)$ $\\Rightarrow$ $x$ can take values from $[e^{-1},e)$.</p>\n\n<p>Therefore the solution is $[e^{-1},1]\\bigcup\\{2\\}$.</p>\n"
  }
] | Question: Solve floor equation over real numbers: $\lfloor x \rfloor + \lfloor -x \rfloor = \lfloor \log x \rfloor$
Answer 1: <p>Hint:</p>
<p>If $x \in \mathbb Z$
$$\lfloor x \rfloor + \lfloor -x \rfloor =0$$</p>
<p>If $x \not\in \mathbb Z$
$$\lfloor x \rfloor + \lfloor -x \rfloor =-1$$</p>
<p>Case 1) $$\lfloor \ln x \rfloor=0$$
$$0\le \ln x<1$$
$$1\le x<e$$
$$x \in \{1,2\}$$</p>
<p>Case 2) $$\lfloor \ln x \rfloor=-1$$
$$-1\le \ln x<0$$
$$\frac 1e\le x<1 $$</p>
Answer 2: <p>We know that if $x \in \mathbb Z$
$$\lfloor x \rfloor + \lfloor -x \rfloor =0$$</p>
<p>And if $x \not\in \mathbb Z$
$$\lfloor x \rfloor + \lfloor -x \rfloor =-1$$</p>
<p>So we can easily see that the solutions are the integers $x \in \mathbb Z \cap [1,e)$ and the non-integers $x \in [\frac {1}{e},1)$. Hope it helps. </p>
Answer 3: <p>Hint:</p>
<p>note that:</p>
<p>$
\lfloor x\rfloor+\lfloor -x\rfloor=0
$
if $x \in \mathbb{Z}$, and </p>
<p>$
\lfloor x\rfloor+\lfloor -x\rfloor=-1
$
if $x$ is not an integer.</p>
Answer 4: <p>For each integer value the LHS equals $0$, for each non-integer value it's $-1$.</p>
<p>So find integer values s.t. $0 \le \log x < 1$ and find all non-integers s.t. $-1 \le \log x < 0$</p>
Answer 5: <p>The range of $[x]+[-x]$ will be $\{-1,0\}$. If $x$ is an integer then $0$, if not an integer then $-1$.</p>
<p>$[\log x]$ should be $\{-1,0\}$ $\Rightarrow$ $\log x$ can take values in $[-1,1)$ $\Rightarrow$ $x$ can take values from $[e^{-1},e)$.</p>
<p>Therefore the solution is $[e^{-1},1]\cup\{2\}$.</p>
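<p>A minimal numeric sketch of the case analysis above (my illustration, not from the answers; the helper <code>holds</code> is a hypothetical name), reading $\log$ as the natural logarithm as the answers do. Scanning a grid recovers the solution set $[e^{-1},1]\cup\{2\}$:</p>
<pre><code>import math

def holds(x):
    # floor(x) + floor(-x) is 0 for integers and -1 otherwise
    return math.floor(x) + math.floor(-x) == math.floor(math.log(x))

sols = [k / 1000 for k in range(1, 3001) if holds(k / 1000)]
print(round(min(sols), 3))               # 0.368, i.e. about 1/e
print(max(s for s in sols if s < 1.5))   # 1.0, the right end of the interval
print(2.0 in sols, 2.5 in sols)          # True False: only the integer 2 beyond 1
</code></pre>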
|
Which R to use to indicate the real numbers? | 2,087,252 | 5 | https://math.stackexchange.com/questions/2087252/which-r-to-use-to-indicate-the-real-numbers | <p>In general, when people are talking about set of real numbers, they usually use one of the following notations:</p>
<blockquote>
<p>$$\tag{1} x \in R.$$
$$\tag{2} x \in \textbf{R}.$$
$$\tag{3} x \in \mathbb{R}.$$
$$\tag{4} x \in \Re.$$</p>
</blockquote>
<p>I understand that it might be convenient to use one notation over another (e.g. when the letter $R$ has already been assigned a different meaning, like a region of integration).</p>
<p>However, I am curious whether there is an actual distinction between the notations, like a historical reason, or whether it is just a matter of preference which notation to use.</p>
<p>The last one seems to be used quite often as well.</p>
| [
{
"answer_id": 2087260,
"score": 12,
"text": "<p><strong>Nr. (1)</strong> usually used for a region in analysis, and is a ring in algebra also,</p>\n\n<p><strong>Nr. (2)</strong> not conventional in new mathematical books for reals. Some people use <em>Bold</em> characters for vectors, like <strong>A, B, ...</strong></p>\n\n<p><strong>Nr. (3)</strong> is the best character, preferred by many mathematicians. This character specific number sets like $\\mathbb{R}$ for <em>real numbers</em>, $\\mathbb{C}$ for <em>complex numbers</em>, $\\mathbb{Q}$ for <em>rationals</em>, $\\mathbb{Z}$ for <em>integers</em>, $\\mathbb{N}$ for <em>natural numbers</em>, and so on. It is the best character for this purpose.</p>\n\n<p><strong>Nr. (4)</strong> designed for real part of a complex number in computer editors, but mathematicians don't used it vastly. They used bold $Re$ (<strong>Re</strong>) instead of it.</p>\n"
},
{
"answer_id": 2087253,
"score": 7,
"text": "<p>it is $$\\mathbb{R}$$ the right sign</p>\n"
},
{
"answer_id": 2087255,
"score": 3,
"text": "<p>I prefer $\\mathbf{R}$ in print, leaving $\\mathbb{R}$ for writing on paper or blackboard.</p>\n\n<p>It's an aesthetic choice,\n also made by Serre and Knuth for instance:</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Blackboard_bold\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Blackboard_bold</a></p>\n\n<p>See a discussion at</p>\n\n<p><a href=\"https://groups.google.com/forum/#!msg/sage-devel/aM3wW0AwMoY/KwNSMcPxeLcJ\" rel=\"nofollow noreferrer\">https://groups.google.com/forum/#!msg/sage-devel/aM3wW0AwMoY/KwNSMcPxeLcJ</a></p>\n"
  }
] | Question: Which R to use to indicate the real numbers?
Answer 1: <p><strong>Nr. (1)</strong> is usually used for a region in analysis, and also denotes a ring in algebra.</p>
<p><strong>Nr. (2)</strong> is not conventional for the reals in newer mathematical books. Some people use <em>bold</em> characters for vectors, like <strong>A, B, ...</strong></p>
<p><strong>Nr. (3)</strong> is the best character, preferred by many mathematicians. This typeface is specific to number sets like $\mathbb{R}$ for <em>real numbers</em>, $\mathbb{C}$ for <em>complex numbers</em>, $\mathbb{Q}$ for <em>rationals</em>, $\mathbb{Z}$ for <em>integers</em>, $\mathbb{N}$ for <em>natural numbers</em>, and so on.</p>
<p><strong>Nr. (4)</strong> is designed for the real part of a complex number in computer editors, but mathematicians don't use it much; they use bold $Re$ (<strong>Re</strong>) instead.</p>
Answer 2: <p>The right sign is $$\mathbb{R}$$</p>
Answer 3: <p>I prefer $\mathbf{R}$ in print, leaving $\mathbb{R}$ for writing on paper or blackboard.</p>
<p>It's an aesthetic choice,
also made by Serre and Knuth for instance:</p>
<p><a href="https://en.wikipedia.org/wiki/Blackboard_bold" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Blackboard_bold</a></p>
<p>See a discussion at</p>
<p><a href="https://groups.google.com/forum/#!msg/sage-devel/aM3wW0AwMoY/KwNSMcPxeLcJ" rel="nofollow noreferrer">https://groups.google.com/forum/#!msg/sage-devel/aM3wW0AwMoY/KwNSMcPxeLcJ</a></p>
|
For $a+b+c=2$ prove that $2a^ab^bc^c\geq a^2b+b^2c+c^2a$ | 2,087,780 | 5 | https://math.stackexchange.com/questions/2087780/for-abc-2-prove-that-2aabbcc-geq-a2bb2cc2a | <p>Let $a$, $b$ and $c$ be positive numbers such that $a+b+c=2$. Prove that:
$$2a^ab^bc^c\geq a^2b+b^2c+c^2a$$
I tried convexity, but without success:</p>
<p>We need to prove that
$$\ln2+\sum_{cyc}a\ln{a}\geq\ln\sum_{cyc}a^2b$$ and since $f(x)=x\ln{x}$ is a convex function, by Jensen we obtain:
$$\ln2+\sum_{cyc}a\ln{a}\geq\ln2+3\cdot\frac{2}{3}\ln\frac{2}{3}=\ln\frac{8}{9}.$$
Thus, we need to prove that
$$\frac{8}{9}\geq\sum_{cyc}a^2b$$ or
$$(a+b+c)^3\geq9(a^2b+b^2c+c^2a),$$ which is wrong for $c\rightarrow0^+$.</p>
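<p>Here is a quick numerical illustration of this failure (the test point $a=\frac43$, $b=\frac23-10^{-3}$, $c=10^{-3}$ is just one convenient choice near the boundary, not part of the problem):</p>
<pre><code># One test point near c = 0 with a + b + c = 2: the claimed inequality holds,
# but the Jensen-based bound (a+b+c)^3 >= 9(a^2 b + b^2 c + c^2 a) fails.
from math import log, exp

a, b, c = 4/3, 2/3 - 1e-3, 1e-3
lhs = 2 * exp(a*log(a) + b*log(b) + c*log(c))   # 2 a^a b^b c^c
rhs = a*a*b + b*b*c + c*c*a                     # a^2 b + b^2 c + c^2 a
print(lhs >= rhs)            # True  -- the claimed inequality survives
print((a+b+c)**3 >= 9*rhs)   # False -- so the route via Jensen cannot work
</code></pre>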
<p>The equality occurs for $a=b=c=\frac{2}{3}$.</p>
| [
{
"answer_id": 2864456,
"score": 0,
"text": "<p>Hints :In fact the key of this inequality is the following lemma :</p>\n\n<blockquote>\n <p>$$2(a+\\varepsilon)^{a+\\varepsilon}b^b(c-\\varepsilon)^{c-\\varepsilon}\\geq 2a^a b^b c^c \\geq a^2b+b^2c+c^2a\\geq (a+\\varepsilon)^2b+b^2(c-\\varepsilon)+(a+\\varepsilon)(c-\\varepsilon)^2$$\n With $0<\\varepsilon<c$ and $a+\\varepsilon\\geq c-\\varepsilon $</p>\n</blockquote>\n\n<p>Proof :\nWe have to prove (for the first part)\n$$2(a+\\varepsilon)^{a+\\varepsilon}b^b(c-\\varepsilon)^{c-\\varepsilon}\\geq 2a^a b^b c^c$$\nWich is equivalent to :</p>\n\n<p>$$(a+\\varepsilon)ln(a+\\varepsilon)+(c-\\varepsilon)ln(c-\\varepsilon)\\geq aln(a)+cln(c)$$</p>\n\n<p>Now we use Niculescu's inequality to solve this :</p>\n\n<blockquote>\n <p>Let $a,b,c,d$ be real positive numbers such that$$ a\\geq c , b\\leq d ,c\\geq d $$\n And $f$ be a convex function we have :</p>\n</blockquote>\n\n<p>$$0.5(f(a)+f(b))-f((a+b)0.5)\\geq 0.5(f(c)+f(d))-f(0.5(c+d))$$</p>\n\n<p>It's easy to remark that we have :</p>\n\n<p>$$ a+\\varepsilon\\geq a , c\\geq c-\\varepsilon ,a+\\varepsilon\\geq c-\\varepsilon $$\nAnd that $xln(x)$ is convex . \nThe second part is easy to show and now you can build the inequality !</p>\n"
},
{
"answer_id": 3504022,
"score": 0,
"text": "<p><strong>Proof</strong>: By taking logarithm on both sides, it suffices to prove that \n<span class=\"math-container\">$$a\\ln a + b\\ln b + c\\ln c \\ge \\ln \\tfrac{a^2b + b^2c + c^2a}{2}.$$</span></p>\n\n<p>We will use the following bounds (their proof is not hard and thus omitted):\n<span class=\"math-container\">$$x\\ln x \\ge f(x) = \\tfrac{2}{3}\\ln \\tfrac{2}{3} + (1 + \\ln \\tfrac{2}{3})(x- \\tfrac{2}{3})\n + \\tfrac{9}{20}(x-\\tfrac{2}{3})^2, \\quad \\forall x\\in (0, 2]$$</span>\nand\n<span class=\"math-container\">$$\\ln\\tfrac{4}{9} + \\tfrac{9}{4}(y - \\tfrac49) \\ge \\ln y, \\quad \\forall y > 0.$$</span></p>\n\n<p>With the bounds above, it suffices to prove that\n<span class=\"math-container\">$$f(a) + f(b) + f(c) \\ge \\ln\\tfrac{4}{9} + \\tfrac{9}{4}(\\tfrac{a^2b + b^2c + c^2a}{2} - \\tfrac49)$$</span>\nwhich is simplified to (by using <span class=\"math-container\">$a+b+c=2$</span>)\n<span class=\"math-container\">$$18a^2+18b^2+18c^2+16 - 45a^2b - 45b^2c - 45c^2a \\ge 0.$$</span>\nAfter homogenization, it suffices to prove that\n<span class=\"math-container\">$$(18a^2+18b^2+18c^2)\\tfrac{a+b+c}{2}+16(\\tfrac{a+b+c}{2})^3- 45a^2b - 45b^2c - 45c^2a \\ge 0.$$</span>\nThe Buffalo Way works. WLOG, assume that <span class=\"math-container\">$c = \\min(a, b, c)$</span>.\nLet <span class=\"math-container\">$b = c + s, \\ a = c + t$</span> for <span class=\"math-container\">$s, t\\ge 0$</span>.\nIt suffices to prove that\n<span class=\"math-container\">$$(18s^2-18st+18t^2)c +11s^3+15s^2t-30st^2+11t^3\\ge 0.$$</span>\nIt is not hard to prove that <span class=\"math-container\">$11s^3+15s^2t-30st^2+11t^3\\ge 0$</span> for <span class=\"math-container\">$s, t\\ge 0$</span>. We are done.</p>\n"
},
{
"answer_id": 4535332,
"score": 0,
"text": "<p><em>Remark</em>: My first proof does not work for the stronger inequality below.</p>\n<p>Let us prove a stronger inequality:\n<span class=\"math-container\">$$2a^a b^b c^c \\ge \\frac{4}{27}(a + b + c)^3 - abc.$$</span>\n(<em>Note</em>: It is well-known that <span class=\"math-container\">$a^2b + b^2c + c^2a \\le \\frac{4}{27}(a + b + c)^3 - abc$</span>.)</p>\n<p>Letting <span class=\"math-container\">$x = \\frac32 a, y = \\frac32 b, z = \\frac32 c$</span>, it suffices to prove that, for all <span class=\"math-container\">$x, y, z > 0$</span> with <span class=\"math-container\">$x + y + z = 3$</span>,\n<span class=\"math-container\">$$3 (x^x y^y z^z)^{2/3} \\ge \\frac{4}{27}(x+y+z)^3 - xyz. $$</span>\n(<em>Note</em>: We have <span class=\"math-container\">$a^a b^b c^c\n= (2/3)^{2(x+y+z)/3}(x^xy^yz^x)^{2/3} = \\frac49 (x^xy^yz^x)^{2/3} $</span>.)</p>\n<p>Using <span class=\"math-container\">$\\mathrm{e}^u \\ge 1 + u$</span>, we have\n<span class=\"math-container\">$$(x^x y^y z^z)^{2/3}\n= \\mathrm{e}^{\\frac23(x\\ln x + y\\ln y + z\\ln z)}\n\\ge 1 + \\frac23(x\\ln x + y\\ln y + z\\ln z).$$</span></p>\n<p>It suffices to prove that\n<span class=\"math-container\">$$3 + 2x\\ln x + 2y\\ln y + 2z\\ln z \\ge \\frac{4}{27}(x+y+z)^3 - xyz.$$</span></p>\n<p><strong>Fact 1</strong>: It holds that\n<span class=\"math-container\">$u\\ln u \\ge u - 1 + \\frac12(u-1)^2 - \\frac16(u-1)^3\n= -\\frac13 - \\frac12 u + u^2 - \\frac16 u^3$</span>\nfor all <span class=\"math-container\">$u > 0$</span>.<br />\n(The proof is given at the end. <em>Note</em>: The RHS is <span class=\"math-container\">$3$</span>-th order Taylor approximation of <span class=\"math-container\">$u\\ln u$</span> around <span class=\"math-container\">$u = 1$</span>.)</p>\n<p>Using Fact 1, it suffices to prove that\n<span class=\"math-container\">\\begin{align*}\n\t&3 + \\left(-\\frac23 - x + 2x^2 - \\frac13 x^3\\right)\n\t+ \\left(-\\frac23 - y + 2y^2 - \\frac13 y^3\\right)\\\\\n\t&\\quad + \\left(-\\frac23 - z + 2z^2 - \\frac13 z^3\\right)\\\\\n\t&\\ge \\frac{4}{27}(x+y+z)^3 - xyz\n\\end{align*}</span>\nor (using <span class=\"math-container\">$x + y + z = 3$</span>)\n<span class=\"math-container\">$$3 - xy - yz - zx \\ge 0$$</span>\nwhich is true.</p>\n<p>We are done.</p>\n<hr>\n<p><em>Proof of Fact 1</em>:</p>\n<p>It suffices to prove that\n<span class=\"math-container\">$$\\ln u \\ge \\frac{1}{u}\\left(u - 1 + \\frac12(u-1)^2 - \\frac16(u-1)^3\\right).$$</span>\nLet <span class=\"math-container\">$F(u) := \\mathrm{LHS} - \\mathrm{RHS}$</span>. We have\n<span class=\"math-container\">$$F'(u) = \\frac{(u-1)^3}{3u^2}.$$</span>\nWe have <span class=\"math-container\">$F'(u) < 0$</span> on <span class=\"math-container\">$(0, 1)$</span>, and <span class=\"math-container\">$F'(u) > 0$</span> on <span class=\"math-container\">$(0, \\infty)$</span>, and <span class=\"math-container\">$F'(1) = 0$</span>.\nThus, <span class=\"math-container\">$F(u) \\ge F(1) = 0$</span> for all <span class=\"math-container\">$u > 0$</span>.</p>\n<p>We are done.</p>\n"
}
] | Question: For $a+b+c=2$ prove that $2a^ab^bc^c\geq a^2b+b^2c+c^2a$
Answer 1: <p>Hint: In fact the key to this inequality is the following lemma:</p>
<blockquote>
<p>$$2(a+\varepsilon)^{a+\varepsilon}b^b(c-\varepsilon)^{c-\varepsilon}\geq 2a^a b^b c^c \geq a^2b+b^2c+c^2a\geq (a+\varepsilon)^2b+b^2(c-\varepsilon)+(a+\varepsilon)(c-\varepsilon)^2$$
  with $0<\varepsilon<c$ and $a+\varepsilon\geq c-\varepsilon$.</p>
</blockquote>
<p>Proof:
For the first part we have to prove
$$2(a+\varepsilon)^{a+\varepsilon}b^b(c-\varepsilon)^{c-\varepsilon}\geq 2a^a b^b c^c,$$
which is equivalent to:</p>
<p>$$(a+\varepsilon)\ln(a+\varepsilon)+(c-\varepsilon)\ln(c-\varepsilon)\geq a\ln(a)+c\ln(c).$$</p>
<p>Now we use Niculescu's inequality to settle this:</p>
<blockquote>
  <p>Let $a,b,c,d$ be positive real numbers such that $$a\geq c,\quad b\leq d,\quad c\geq d,$$
  and let $f$ be a convex function; then:</p>
</blockquote>
<p>$$0.5(f(a)+f(b))-f(0.5(a+b))\geq 0.5(f(c)+f(d))-f(0.5(c+d)).$$</p>
<p>It is easy to see that:</p>
<p>$$a+\varepsilon\geq a,\quad c\geq c-\varepsilon,\quad a+\varepsilon\geq c-\varepsilon,$$
and that $x\ln(x)$ is convex.
The second part is easy to show, and now you can build up the inequality!</p>
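<p>(A numerical spot-check of the first, convexity-based part of the lemma; this Python sketch uses one arbitrary admissible test point and is only a sanity test, not part of the proof.)</p>
<pre><code># Check (a+eps)*ln(a+eps) + (c-eps)*ln(c-eps) >= a*ln(a) + c*ln(c)
# at one point with a + b + c = 2, 0 < eps < c, and a + eps >= c - eps.
from math import log

def phi(a, c):
    # the quantity compared in the first part of the lemma
    return a*log(a) + c*log(c)

a, b, c, eps = 0.9, 0.7, 0.4, 0.1
print(phi(a + eps, c - eps) >= phi(a, c))   # True, by convexity of x*ln(x)
</code></pre>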
Answer 2: <p><strong>Proof</strong>: By taking logarithms of both sides, it suffices to prove that 
<span class="math-container">$$a\ln a + b\ln b + c\ln c \ge \ln \tfrac{a^2b + b^2c + c^2a}{2}.$$</span></p>
<p>We will use the following bounds (their proof is not hard and thus omitted):
<span class="math-container">$$x\ln x \ge f(x) = \tfrac{2}{3}\ln \tfrac{2}{3} + (1 + \ln \tfrac{2}{3})(x- \tfrac{2}{3})
+ \tfrac{9}{20}(x-\tfrac{2}{3})^2, \quad \forall x\in (0, 2]$$</span>
and
<span class="math-container">$$\ln\tfrac{4}{9} + \tfrac{9}{4}(y - \tfrac49) \ge \ln y, \quad \forall y > 0.$$</span></p>
<p>With the bounds above, it suffices to prove that
<span class="math-container">$$f(a) + f(b) + f(c) \ge \ln\tfrac{4}{9} + \tfrac{9}{4}(\tfrac{a^2b + b^2c + c^2a}{2} - \tfrac49)$$</span>
which simplifies (using <span class="math-container">$a+b+c=2$</span>) to
<span class="math-container">$$18a^2+18b^2+18c^2+16 - 45a^2b - 45b^2c - 45c^2a \ge 0.$$</span>
After homogenization, it suffices to prove that
<span class="math-container">$$(18a^2+18b^2+18c^2)\tfrac{a+b+c}{2}+16(\tfrac{a+b+c}{2})^3- 45a^2b - 45b^2c - 45c^2a \ge 0.$$</span>
The Buffalo Way works. WLOG, assume that <span class="math-container">$c = \min(a, b, c)$</span>.
Let <span class="math-container">$b = c + s, \ a = c + t$</span> for <span class="math-container">$s, t\ge 0$</span>.
It suffices to prove that
<span class="math-container">$$(18s^2-18st+18t^2)c +11s^3+15s^2t-30st^2+11t^3\ge 0.$$</span>
It is not hard to prove that <span class="math-container">$11s^3+15s^2t-30st^2+11t^3\ge 0$</span> for <span class="math-container">$s, t\ge 0$</span>. We are done.</p>
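<p>(The two auxiliary bounds and the final cubic can be spot-checked numerically; the grids and tolerances in the Python sketch below are arbitrary choices, and the test is no substitute for the omitted proofs.)</p>
<pre><code># Grid checks (not proofs) of: the quadratic lower bound for x*ln(x) on (0, 2],
# the tangent-line upper bound for ln(y), and the cubic in s, t from the
# Buffalo Way step.
from math import log

f = lambda x: (2/3)*log(2/3) + (1 + log(2/3))*(x - 2/3) + (9/20)*(x - 2/3)**2

print(all(x*log(x) >= f(x) - 1e-12 for x in (0.01*k for k in range(1, 201))))
print(all(log(4/9) + (9/4)*(y - 4/9) >= log(y) - 1e-12
          for y in (0.01*k for k in range(1, 500))))
print(all(11*s**3 + 15*s**2*t - 30*s*t**2 + 11*t**3 >= -1e-9
          for s in (0.05*k for k in range(60)) for t in (0.05*k for k in range(60))))
# expected output: True True True
</code></pre>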
Answer 3: <p><em>Remark</em>: My first proof does not work for the stronger inequality below.</p>
<p>Let us prove a stronger inequality:
<span class="math-container">$$2a^a b^b c^c \ge \frac{4}{27}(a + b + c)^3 - abc.$$</span>
(<em>Note</em>: It is well-known that <span class="math-container">$a^2b + b^2c + c^2a \le \frac{4}{27}(a + b + c)^3 - abc$</span>.)</p>
<p>Letting <span class="math-container">$x = \frac32 a, y = \frac32 b, z = \frac32 c$</span>, it suffices to prove that, for all <span class="math-container">$x, y, z > 0$</span> with <span class="math-container">$x + y + z = 3$</span>,
<span class="math-container">$$3 (x^x y^y z^z)^{2/3} \ge \frac{4}{27}(x+y+z)^3 - xyz. $$</span>
(<em>Note</em>: We have <span class="math-container">$a^a b^b c^c
= (2/3)^{2(x+y+z)/3}(x^xy^yz^z)^{2/3} = \frac49 (x^xy^yz^z)^{2/3}$</span>.)</p>
<p>Using <span class="math-container">$\mathrm{e}^u \ge 1 + u$</span>, we have
<span class="math-container">$$(x^x y^y z^z)^{2/3}
= \mathrm{e}^{\frac23(x\ln x + y\ln y + z\ln z)}
\ge 1 + \frac23(x\ln x + y\ln y + z\ln z).$$</span></p>
<p>It suffices to prove that
<span class="math-container">$$3 + 2x\ln x + 2y\ln y + 2z\ln z \ge \frac{4}{27}(x+y+z)^3 - xyz.$$</span></p>
<p><strong>Fact 1</strong>: It holds that
<span class="math-container">$u\ln u \ge u - 1 + \frac12(u-1)^2 - \frac16(u-1)^3
= -\frac13 - \frac12 u + u^2 - \frac16 u^3$</span>
for all <span class="math-container">$u > 0$</span>.<br />
(The proof is given at the end. <em>Note</em>: The RHS is the third-order Taylor approximation of <span class="math-container">$u\ln u$</span> around <span class="math-container">$u = 1$</span>.)</p>
<p>Using Fact 1, it suffices to prove that
<span class="math-container">\begin{align*}
&3 + \left(-\frac23 - x + 2x^2 - \frac13 x^3\right)
+ \left(-\frac23 - y + 2y^2 - \frac13 y^3\right)\\
&\quad + \left(-\frac23 - z + 2z^2 - \frac13 z^3\right)\\
&\ge \frac{4}{27}(x+y+z)^3 - xyz
\end{align*}</span>
or (using <span class="math-container">$x + y + z = 3$</span>)
<span class="math-container">$$3 - xy - yz - zx \ge 0$$</span>
which is true.</p>
<p>We are done.</p>
<hr>
<p><em>Proof of Fact 1</em>:</p>
<p>It suffices to prove that
<span class="math-container">$$\ln u \ge \frac{1}{u}\left(u - 1 + \frac12(u-1)^2 - \frac16(u-1)^3\right).$$</span>
Let <span class="math-container">$F(u) := \mathrm{LHS} - \mathrm{RHS}$</span>. We have
<span class="math-container">$$F'(u) = \frac{(u-1)^3}{3u^2}.$$</span>
We have <span class="math-container">$F'(u) < 0$</span> on <span class="math-container">$(0, 1)$</span>, and <span class="math-container">$F'(u) > 0$</span> on <span class="math-container">$(0, \infty)$</span>, and <span class="math-container">$F'(1) = 0$</span>.
Thus, <span class="math-container">$F(u) \ge F(1) = 0$</span> for all <span class="math-container">$u > 0$</span>.</p>
<p>We are done.</p>
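<p>(As a final sanity check, not part of the proof: Fact 1 and the stronger inequality can be tested numerically. The sampling scheme in this Python sketch is an arbitrary way of generating positive triples with $a+b+c=2$.)</p>
<pre><code># Spot-check Fact 1 on a grid, and the stronger inequality
# 2 a^a b^b c^c >= (4/27)(a+b+c)^3 - abc on random triples with a + b + c = 2.
import random
from math import log, exp

def fact1_gap(u):
    # u*ln(u) minus its third-order Taylor polynomial around u = 1
    return u*log(u) - (u - 1 + (u - 1)**2/2 - (u - 1)**3/6)

print(all(fact1_gap(0.01 + 0.05*k) >= -1e-12 for k in range(200)))  # True

random.seed(0)
ok = True
for _ in range(10_000):
    x, y = sorted(random.uniform(0, 2) for _ in range(2))
    a, b, c = x, y - x, 2 - y            # random positive split of 2
    if min(a, b, c) <= 0:
        continue
    lhs = 2 * exp(a*log(a) + b*log(b) + c*log(c))
    if lhs < (4/27)*2**3 - a*b*c - 1e-9:
        ok = False
print(ok)                                 # True; equality at a = b = c = 2/3
</code></pre>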
|
Give an indeterminate limit of a function that is always indeterminate with iterated attempts at l'Hopital's Rule. | 2,088,096 | 5 | https://math.stackexchange.com/questions/2088096/give-an-indeterminate-limit-of-a-function-that-is-always-indeterminate-with-iter | <p>I am freshening up on differential calculus, and I came up with a thought experiment in the context of using l'Hopital's rule.</p>
<p>Is there a limit of a function that will always give an indeterminate form regardless of the number of times the numerator and denominator are differentiated?</p>
<p>Answerers can choose any value to be approached, and any function. Single-variable and multi-variable are fine. I also don't mind which indeterminate forms are involved.</p>
| [
{
"answer_id": 2088101,
"score": 6,
"text": "<p>Here is a fairly trivial limit:</p>\n\n<p>$$\\lim_{x\\to\\infty}\\frac{e^x}{e^x}=1$$</p>\n\n<p>but it is quite obvious L'Hospital's won't go anywhere.</p>\n\n<p>In general, L'Hospital's rule won't work if the quotient of derivatives is cyclic, basically repeating itself.</p>\n"
},
{
"answer_id": 2088097,
"score": 3,
"text": "<p>$$\\lim_{x\\to\\infty}\\tanh x = \\lim_{x\\to\\infty} \\frac{\\sinh x}{\\cosh x}=\\lim_{x\\to\\infty} \\frac{\\exp(x)-\\exp(-x)}{\\exp(x)+\\exp(-x)} =\\lim_{x\\to\\infty} \\frac{\\exp 2x-1}{\\exp 2x+1}$$</p>\n\n<p>LHR ends up being fairly unhelpful for all 3 of these, falling into a loop for the 2nd and 3rd expressions, and not getting any simpler for the 4th.</p>\n\n<p><strong>Additional note:</strong> Taking the second expression and applying the substitution $u=\\sinh x$ gives the limit described in a comment by David Mitra, namely $$\\lim_{u\\to\\infty}\\frac{u}{\\sqrt{u^2+1}}$$</p>\n\n<p>which also falls into a loop of period 2.</p>\n"
}
] | Question: Give an indeterminate limit of a function that is always indeterminate with iterated attempts at l'Hopital's Rule.
Answer 1: <p>Here is a fairly trivial limit:</p>
<p>$$\lim_{x\to\infty}\frac{e^x}{e^x}=1$$</p>
<p>but it is quite obvious L'Hospital's won't go anywhere.</p>
<p>In general, L'Hospital's rule won't work if the quotient of derivatives is cyclic, basically repeating itself.</p>
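<p><em>Illustration (editorial addition)</em>: a minimal SymPy sketch, assuming SymPy is available, showing why this example cycles: differentiating numerator and denominator just reproduces $e^x$ over $e^x$ at every step.</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
f = g = sp.exp(x)
for step in range(1, 4):
    f, g = sp.diff(f, x), sp.diff(g, x)
    print(step, f, g)   # both stay exp(x), so each l'Hopital step changes nothing
</code></pre>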
Answer 2: <p>$$\lim_{x\to\infty}\tanh x = \lim_{x\to\infty} \frac{\sinh x}{\cosh x}=\lim_{x\to\infty} \frac{\exp(x)-\exp(-x)}{\exp(x)+\exp(-x)} =\lim_{x\to\infty} \frac{\exp 2x-1}{\exp 2x+1}$$</p>
<p>LHR ends up being fairly unhelpful for all three quotient forms above, falling into a loop for the 2nd and 3rd expressions, and not getting any simpler for the 4th.</p>
<p><strong>Additional note:</strong> Taking the second expression and applying the substitution $u=\sinh x$ gives the limit described in a comment by David Mitra, namely $$\lim_{u\to\infty}\frac{u}{\sqrt{u^2+1}}$$</p>
<p>which also falls into a loop of period 2.</p>
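<p><em>Illustration (editorial addition)</em>: a minimal SymPy sketch, assuming SymPy is available, of the period-2 loop for $u/\sqrt{u^2+1}$; each pass applies one l'Hopital step to the current quotient.</p>
<pre><code>import sympy as sp

u = sp.symbols('u', positive=True)
expr = u / sp.sqrt(u**2 + 1)

for step in range(1, 5):
    num, den = sp.fraction(sp.together(expr))               # split the current quotient
    expr = sp.simplify(sp.diff(num, u) / sp.diff(den, u))   # one l'Hopital step
    print(step, expr)   # alternates between u/sqrt(u**2+1) and sqrt(u**2+1)/u
</code></pre>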
|
How many rational solutions does $(x^2+4x+8)^2+3x(x^2+4x+8)+2x^2$ have? | 2,088,220 | 5 | https://math.stackexchange.com/questions/2088220/how-many-rational-solutions-does-x24x823xx24x82x2-have | <blockquote>
<p>How many rational solutions does $(x^2+4x+8)^2+3x(x^2+4x+8)+2x^2=0$ have?</p>
</blockquote>
<p>I don't know how to start...</p>
| [
{
"answer_id": 2088227,
"score": 5,
"text": "<p>Let $ y=x^2+4x+8 $. Your equation becomes $ 0=y^2+3yx+2x^2=(y+x)(y+2x)$. So you only have to solve 2 quadratic equations. Solve them and you'll see the answer is 2</p>\n"
},
{
"answer_id": 2088232,
"score": 5,
"text": "<p>Let's start by applying the quadratic formula.</p>\n\n<p>We have: $(x^{2}+4x+8)^{2}+3x\\hspace{1 mm}(x^{2}+4x+8)+2x^{2}=0$</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=\\dfrac{-3x\\pm\\sqrt{(3x)^{2}-4(1)(2x^{2})}}{2(1)}$</p>\n\n<p>$\\hspace{27.75 mm} =\\dfrac{-3x\\pm{x}}{2}$</p>\n\n<p>$\\hspace{27.75 mm} =-2x,-x$</p>\n\n<p>Then,</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-2x$</p>\n\n<p>$\\Rightarrow x^{2}+6x+8=0$</p>\n\n<p>$\\Rightarrow x^{2}+2x+4x+8=0$</p>\n\n<p>$\\Rightarrow x\\hspace{1 mm}(x+2)+4\\hspace{1 mm}(x+2)=0$</p>\n\n<p>$\\Rightarrow (x+2)(x+4)=0$</p>\n\n<p>$\\Rightarrow x=-2,-4$</p>\n\n<p>or</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-x$</p>\n\n<p>$\\Rightarrow x^{2}+5x+8=0$</p>\n\n<p>$\\Rightarrow x=\\dfrac{-5\\pm\\sqrt{5^{2}-4(1)(8)}}{2(1)}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{-7}}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{7}i}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5-\\sqrt{7}i}{2},\\dfrac{-5+\\sqrt{7}i}{2}$</p>\n\n<p>However, $x\\in\\mathbb{Q}$.</p>\n\n<p>Therefore, the solutions to the equation are $x=-2$ and $x=-4$.</p>\n"
},
{
"answer_id": 2088237,
"score": 5,
"text": "<p>We can still use the classic \"complete square\" trick to do this without having to use the \"double\" variables as shown by the first answer. To this end, we have: $\\left((x^2+4x+8) + \\dfrac{3x}{2}\\right)^2 - \\dfrac{9x^2}{4}+2x^2=0\\implies \\left((x^2+4x+8)+\\dfrac{3x}{2}\\right)^2 = \\left(\\dfrac{x}{2}\\right)^2$. Now using the familiar formula $a^2 = b^2 \\implies a = \\pm b$ to settle the answer.</p>\n"
},
{
"answer_id": 2088232,
"score": 5,
"text": "<p>Let's start by applying the quadratic formula.</p>\n\n<p>We have: $(x^{2}+4x+8)^{2}+3x\\hspace{1 mm}(x^{2}+4x+8)+2x^{2}=0$</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=\\dfrac{-3x\\pm\\sqrt{(3x)^{2}-4(1)(2x^{2})}}{2(1)}$</p>\n\n<p>$\\hspace{27.75 mm} =\\dfrac{-3x\\pm{x}}{2}$</p>\n\n<p>$\\hspace{27.75 mm} =-2x,-x$</p>\n\n<p>Then,</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-2x$</p>\n\n<p>$\\Rightarrow x^{2}+6x+8=0$</p>\n\n<p>$\\Rightarrow x^{2}+2x+4x+8=0$</p>\n\n<p>$\\Rightarrow x\\hspace{1 mm}(x+2)+4\\hspace{1 mm}(x+2)=0$</p>\n\n<p>$\\Rightarrow (x+2)(x+4)=0$</p>\n\n<p>$\\Rightarrow x=-2,-4$</p>\n\n<p>or</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-x$</p>\n\n<p>$\\Rightarrow x^{2}+5x+8=0$</p>\n\n<p>$\\Rightarrow x=\\dfrac{-5\\pm\\sqrt{5^{2}-4(1)(8)}}{2(1)}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{-7}}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{7}i}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5-\\sqrt{7}i}{2},\\dfrac{-5+\\sqrt{7}i}{2}$</p>\n\n<p>However, $x\\in\\mathbb{Q}$.</p>\n\n<p>Therefore, the solutions to the equation are $x=-2$ and $x=-4$.</p>\n"
},
{
"answer_id": 2088237,
"score": 5,
"text": "<p>We can still use the classic \"complete square\" trick to do this without having to use the \"double\" variables as shown by the first answer. To this end, we have: $\\left((x^2+4x+8) + \\dfrac{3x}{2}\\right)^2 - \\dfrac{9x^2}{4}+2x^2=0\\implies \\left((x^2+4x+8)+\\dfrac{3x}{2}\\right)^2 = \\left(\\dfrac{x}{2}\\right)^2$. Now using the familiar formula $a^2 = b^2 \\implies a = \\pm b$ to settle the answer.</p>\n"
},
{
"answer_id": 2088227,
"score": 5,
"text": "<p>Let $ y=x^2+4x+8 $. Your equation becomes $ 0=y^2+3yx+2x^2=(y+x)(y+2x)$. So you only have to solve 2 quadratic equations. Solve them and you'll see the answer is 2</p>\n"
},
{
"answer_id": 2088232,
"score": 5,
"text": "<p>Let's start by applying the quadratic formula.</p>\n\n<p>We have: $(x^{2}+4x+8)^{2}+3x\\hspace{1 mm}(x^{2}+4x+8)+2x^{2}=0$</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=\\dfrac{-3x\\pm\\sqrt{(3x)^{2}-4(1)(2x^{2})}}{2(1)}$</p>\n\n<p>$\\hspace{27.75 mm} =\\dfrac{-3x\\pm{x}}{2}$</p>\n\n<p>$\\hspace{27.75 mm} =-2x,-x$</p>\n\n<p>Then,</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-2x$</p>\n\n<p>$\\Rightarrow x^{2}+6x+8=0$</p>\n\n<p>$\\Rightarrow x^{2}+2x+4x+8=0$</p>\n\n<p>$\\Rightarrow x\\hspace{1 mm}(x+2)+4\\hspace{1 mm}(x+2)=0$</p>\n\n<p>$\\Rightarrow (x+2)(x+4)=0$</p>\n\n<p>$\\Rightarrow x=-2,-4$</p>\n\n<p>or</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-x$</p>\n\n<p>$\\Rightarrow x^{2}+5x+8=0$</p>\n\n<p>$\\Rightarrow x=\\dfrac{-5\\pm\\sqrt{5^{2}-4(1)(8)}}{2(1)}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{-7}}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{7}i}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5-\\sqrt{7}i}{2},\\dfrac{-5+\\sqrt{7}i}{2}$</p>\n\n<p>However, $x\\in\\mathbb{Q}$.</p>\n\n<p>Therefore, the solutions to the equation are $x=-2$ and $x=-4$.</p>\n"
},
{
"answer_id": 2088237,
"score": 5,
"text": "<p>We can still use the classic \"complete square\" trick to do this without having to use the \"double\" variables as shown by the first answer. To this end, we have: $\\left((x^2+4x+8) + \\dfrac{3x}{2}\\right)^2 - \\dfrac{9x^2}{4}+2x^2=0\\implies \\left((x^2+4x+8)+\\dfrac{3x}{2}\\right)^2 = \\left(\\dfrac{x}{2}\\right)^2$. Now using the familiar formula $a^2 = b^2 \\implies a = \\pm b$ to settle the answer.</p>\n"
},
{
"answer_id": 2088227,
"score": 5,
"text": "<p>Let $ y=x^2+4x+8 $. Your equation becomes $ 0=y^2+3yx+2x^2=(y+x)(y+2x)$. So you only have to solve 2 quadratic equations. Solve them and you'll see the answer is 2</p>\n"
},
{
"answer_id": 2088232,
"score": 5,
"text": "<p>Let's start by applying the quadratic formula.</p>\n\n<p>We have: $(x^{2}+4x+8)^{2}+3x\\hspace{1 mm}(x^{2}+4x+8)+2x^{2}=0$</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=\\dfrac{-3x\\pm\\sqrt{(3x)^{2}-4(1)(2x^{2})}}{2(1)}$</p>\n\n<p>$\\hspace{27.75 mm} =\\dfrac{-3x\\pm{x}}{2}$</p>\n\n<p>$\\hspace{27.75 mm} =-2x,-x$</p>\n\n<p>Then,</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-2x$</p>\n\n<p>$\\Rightarrow x^{2}+6x+8=0$</p>\n\n<p>$\\Rightarrow x^{2}+2x+4x+8=0$</p>\n\n<p>$\\Rightarrow x\\hspace{1 mm}(x+2)+4\\hspace{1 mm}(x+2)=0$</p>\n\n<p>$\\Rightarrow (x+2)(x+4)=0$</p>\n\n<p>$\\Rightarrow x=-2,-4$</p>\n\n<p>or</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-x$</p>\n\n<p>$\\Rightarrow x^{2}+5x+8=0$</p>\n\n<p>$\\Rightarrow x=\\dfrac{-5\\pm\\sqrt{5^{2}-4(1)(8)}}{2(1)}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{-7}}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{7}i}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5-\\sqrt{7}i}{2},\\dfrac{-5+\\sqrt{7}i}{2}$</p>\n\n<p>However, $x\\in\\mathbb{Q}$.</p>\n\n<p>Therefore, the solutions to the equation are $x=-2$ and $x=-4$.</p>\n"
},
{
"answer_id": 2088237,
"score": 5,
"text": "<p>We can still use the classic \"complete square\" trick to do this without having to use the \"double\" variables as shown by the first answer. To this end, we have: $\\left((x^2+4x+8) + \\dfrac{3x}{2}\\right)^2 - \\dfrac{9x^2}{4}+2x^2=0\\implies \\left((x^2+4x+8)+\\dfrac{3x}{2}\\right)^2 = \\left(\\dfrac{x}{2}\\right)^2$. Now using the familiar formula $a^2 = b^2 \\implies a = \\pm b$ to settle the answer.</p>\n"
},
{
"answer_id": 2088227,
"score": 5,
"text": "<p>Let $ y=x^2+4x+8 $. Your equation becomes $ 0=y^2+3yx+2x^2=(y+x)(y+2x)$. So you only have to solve 2 quadratic equations. Solve them and you'll see the answer is 2</p>\n"
},
{
"answer_id": 2088232,
"score": 5,
"text": "<p>Let's start by applying the quadratic formula.</p>\n\n<p>We have: $(x^{2}+4x+8)^{2}+3x\\hspace{1 mm}(x^{2}+4x+8)+2x^{2}=0$</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=\\dfrac{-3x\\pm\\sqrt{(3x)^{2}-4(1)(2x^{2})}}{2(1)}$</p>\n\n<p>$\\hspace{27.75 mm} =\\dfrac{-3x\\pm{x}}{2}$</p>\n\n<p>$\\hspace{27.75 mm} =-2x,-x$</p>\n\n<p>Then,</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-2x$</p>\n\n<p>$\\Rightarrow x^{2}+6x+8=0$</p>\n\n<p>$\\Rightarrow x^{2}+2x+4x+8=0$</p>\n\n<p>$\\Rightarrow x\\hspace{1 mm}(x+2)+4\\hspace{1 mm}(x+2)=0$</p>\n\n<p>$\\Rightarrow (x+2)(x+4)=0$</p>\n\n<p>$\\Rightarrow x=-2,-4$</p>\n\n<p>or</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-x$</p>\n\n<p>$\\Rightarrow x^{2}+5x+8=0$</p>\n\n<p>$\\Rightarrow x=\\dfrac{-5\\pm\\sqrt{5^{2}-4(1)(8)}}{2(1)}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{-7}}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{7}i}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5-\\sqrt{7}i}{2},\\dfrac{-5+\\sqrt{7}i}{2}$</p>\n\n<p>However, $x\\in\\mathbb{Q}$.</p>\n\n<p>Therefore, the solutions to the equation are $x=-2$ and $x=-4$.</p>\n"
},
{
"answer_id": 2088237,
"score": 5,
"text": "<p>We can still use the classic \"complete square\" trick to do this without having to use the \"double\" variables as shown by the first answer. To this end, we have: $\\left((x^2+4x+8) + \\dfrac{3x}{2}\\right)^2 - \\dfrac{9x^2}{4}+2x^2=0\\implies \\left((x^2+4x+8)+\\dfrac{3x}{2}\\right)^2 = \\left(\\dfrac{x}{2}\\right)^2$. Now using the familiar formula $a^2 = b^2 \\implies a = \\pm b$ to settle the answer.</p>\n"
},
{
"answer_id": 2088227,
"score": 5,
"text": "<p>Let $ y=x^2+4x+8 $. Your equation becomes $ 0=y^2+3yx+2x^2=(y+x)(y+2x)$. So you only have to solve 2 quadratic equations. Solve them and you'll see the answer is 2</p>\n"
},
{
"answer_id": 2088232,
"score": 5,
"text": "<p>Let's start by applying the quadratic formula.</p>\n\n<p>We have: $(x^{2}+4x+8)^{2}+3x\\hspace{1 mm}(x^{2}+4x+8)+2x^{2}=0$</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=\\dfrac{-3x\\pm\\sqrt{(3x)^{2}-4(1)(2x^{2})}}{2(1)}$</p>\n\n<p>$\\hspace{27.75 mm} =\\dfrac{-3x\\pm{x}}{2}$</p>\n\n<p>$\\hspace{27.75 mm} =-2x,-x$</p>\n\n<p>Then,</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-2x$</p>\n\n<p>$\\Rightarrow x^{2}+6x+8=0$</p>\n\n<p>$\\Rightarrow x^{2}+2x+4x+8=0$</p>\n\n<p>$\\Rightarrow x\\hspace{1 mm}(x+2)+4\\hspace{1 mm}(x+2)=0$</p>\n\n<p>$\\Rightarrow (x+2)(x+4)=0$</p>\n\n<p>$\\Rightarrow x=-2,-4$</p>\n\n<p>or</p>\n\n<p>$\\Rightarrow x^{2}+4x+8=-x$</p>\n\n<p>$\\Rightarrow x^{2}+5x+8=0$</p>\n\n<p>$\\Rightarrow x=\\dfrac{-5\\pm\\sqrt{5^{2}-4(1)(8)}}{2(1)}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{-7}}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5\\pm\\sqrt{7}i}{2}$</p>\n\n<p>$\\hspace{9 mm} =\\dfrac{-5-\\sqrt{7}i}{2},\\dfrac{-5+\\sqrt{7}i}{2}$</p>\n\n<p>However, $x\\in\\mathbb{Q}$.</p>\n\n<p>Therefore, the solutions to the equation are $x=-2$ and $x=-4$.</p>\n"
},
{
"answer_id": 2088237,
"score": 5,
"text": "<p>We can still use the classic \"complete square\" trick to do this without having to use the \"double\" variables as shown by the first answer. To this end, we have: $\\left((x^2+4x+8) + \\dfrac{3x}{2}\\right)^2 - \\dfrac{9x^2}{4}+2x^2=0\\implies \\left((x^2+4x+8)+\\dfrac{3x}{2}\\right)^2 = \\left(\\dfrac{x}{2}\\right)^2$. Now using the familiar formula $a^2 = b^2 \\implies a = \\pm b$ to settle the answer.</p>\n"
}
] | Question: How many rational solutions does $(x^2+4x+8)^2+3x(x^2+4x+8)+2x^2$ have?
Answer 1: <p>Let $ y=x^2+4x+8 $. Your equation becomes $ 0=y^2+3yx+2x^2=(y+x)(y+2x)$. So you only have to solve two quadratic equations. Solve them and you'll see the answer is $2$.</p>
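Spelling out the two quadratics from that factorisation (an elaboration added here, not part of the original answer): $$y+2x=x^2+6x+8=(x+2)(x+4)=0\implies x=-2,\,-4,$$ $$y+x=x^2+5x+8=0,\qquad \Delta=5^2-4\cdot 8=-7<0,$$ so the second factor contributes no real (hence no rational) roots, which confirms that there are exactly two rational solutions.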
Answer 2: <p>Let's start by applying the quadratic formula.</p>
<p>We have: $(x^{2}+4x+8)^{2}+3x\hspace{1 mm}(x^{2}+4x+8)+2x^{2}=0$</p>
<p>$\Rightarrow x^{2}+4x+8=\dfrac{-3x\pm\sqrt{(3x)^{2}-4(1)(2x^{2})}}{2(1)}$</p>
<p>$\hspace{27.75 mm} =\dfrac{-3x\pm{x}}{2}$</p>
<p>$\hspace{27.75 mm} =-2x,-x$</p>
<p>Then,</p>
<p>$\Rightarrow x^{2}+4x+8=-2x$</p>
<p>$\Rightarrow x^{2}+6x+8=0$</p>
<p>$\Rightarrow x^{2}+2x+4x+8=0$</p>
<p>$\Rightarrow x\hspace{1 mm}(x+2)+4\hspace{1 mm}(x+2)=0$</p>
<p>$\Rightarrow (x+2)(x+4)=0$</p>
<p>$\Rightarrow x=-2,-4$</p>
<p>or</p>
<p>$\Rightarrow x^{2}+4x+8=-x$</p>
<p>$\Rightarrow x^{2}+5x+8=0$</p>
<p>$\Rightarrow x=\dfrac{-5\pm\sqrt{5^{2}-4(1)(8)}}{2(1)}$</p>
<p>$\hspace{9 mm} =\dfrac{-5\pm\sqrt{-7}}{2}$</p>
<p>$\hspace{9 mm} =\dfrac{-5\pm\sqrt{7}i}{2}$</p>
<p>$\hspace{9 mm} =\dfrac{-5-\sqrt{7}i}{2},\dfrac{-5+\sqrt{7}i}{2}$</p>
<p>However, $x\in\mathbb{Q}$.</p>
<p>Therefore, the solutions to the equation are $x=-2$ and $x=-4$.</p>
Answer 3: <p>We can still use the classic "complete the square" trick to do this without having to introduce the auxiliary variable used in the first answer. To this end, we have: $\left((x^2+4x+8) + \dfrac{3x}{2}\right)^2 - \dfrac{9x^2}{4}+2x^2=0\implies \left((x^2+4x+8)+\dfrac{3x}{2}\right)^2 = \left(\dfrac{x}{2}\right)^2$. Now use the familiar formula $a^2 = b^2 \implies a = \pm b$ to settle the answer.</p>
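For readers who want to double-check the root count mechanically, here is a minimal SymPy sketch (an illustration added here, not part of any answer above; it assumes SymPy is installed):

```python
# Count the rational roots of the quartic; assumes SymPy is available.
from sympy import symbols, expand, roots

x = symbols('x')
p = expand((x**2 + 4*x + 8)**2 + 3*x*(x**2 + 4*x + 8) + 2*x**2)
all_roots = roots(p)                      # {root: multiplicity}
rational = [r for r in all_roots if r.is_rational]
print(sorted(rational))                   # [-4, -2] -> exactly two rational solutions
```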
|
How does the empty set work in arithmetic of surreal numbers? | 2,088,224 | 5 | https://math.stackexchange.com/questions/2088224/how-does-the-empty-set-work-in-arithmetic-of-surreal-numbers | <p>I'm working my way through <em>Surreal Numbers</em> by Knuth, and am finding myself a little hung up on the explanation of how addition works. The rule for addition is given as:</p>
<p>$$
x + y = ((X_L+y)\cup(Y_L+x), (X_R+y)\cup(Y_R+x))
$$</p>
<p>On the next page, one of the protagonists works out $1+1$ to be:</p>
<p>$$
(\{0+1,0+1\},\emptyset)
$$
and follows the reasoning through to prove that $1+1=2$. Where I'm getting a bit lost is that it seems there should be some additions involving $\emptyset$ that are omitted without explanation. If $1$ is defined as $(0,\emptyset)$, then a strict application of the addition rule seems like it should give:</p>
<p>$$
1+1=((0+1)\cup(0+1), (\emptyset+1)\cup(\emptyset+1))
$$
$$
1+1=(1,1)
$$</p>
<p>All of this is obviously wrong, but it's not clear to me why $\emptyset+1$ isn't $1$. I guess the most basic version of my question is: why is $0$ distinct from $\emptyset$ in surreal numbers, and what is the nature of that distinction?</p>
| [
{
"answer_id": 2088259,
"score": 5,
"text": "<p>A surreal number is, in particular, a ordered pair of sets of surreal numbers, so $\\emptyset$ isn't a surreal number at all. In the formula for addition, the expressions like $X_R + y$ do not represent one value but rather the set of all numbers of the form $z + y$, where $z \\in X_R$. But the definition $1 = (\\{0\\}, \\emptyset)$ implies that $1$ has no right options at all. So if $x=1$, then $X_R$ is empty, and there aren't any such numbers. Similar reasoning applies to $Y_R + x$. The conclusion is that $1+1$ has no right options whatsoever.</p>\n"
},
{
"answer_id": 2088423,
"score": 4,
"text": "<p>$1$ is not really defined as $(0,\\emptyset)$, but as $(\\{0\\},\\emptyset)$. Then $$1+1=\\left(\\{0+1\\}\\cup\\{0+1\\},(\\emptyset+1)\\cup(\\emptyset+1)\\right)$$\nHowever, $\\emptyset+1$ is empty since there are no surreals in $\\emptyset$ to add to $1$ ($\\emptyset+1$ is the set of $x+1$ for all $x\\in\\emptyset$), so $1+1$ simplifies to $$\\left(\\{0+1\\}\\cup\\{0+1\\},\\emptyset\\cup\\emptyset\\right)=\\left(\\{0+1\\},\\emptyset\\right)\\text{.}$$</p>\n\n<p>If you know that $0+1$ is $1$, we have $1+1=(\\{1\\},\\emptyset)$.</p>\n\n<p>To answer your other question, $0$ is distinct from $\\emptyset$ in the surreals since every surreal is an ordered pair of sets. $0$ is defined to be the ordered pair $(\\emptyset,\\emptyset)$.</p>\n"
}
] | Question: How does the empty set work in arithmetic of surreal numbers?
Answer 1: <p>A surreal number is, in particular, an ordered pair of sets of surreal numbers, so $\emptyset$ isn't a surreal number at all. In the formula for addition, the expressions like $X_R + y$ do not represent one value but rather the set of all numbers of the form $z + y$, where $z \in X_R$. But the definition $1 = (\{0\}, \emptyset)$ implies that $1$ has no right options at all. So if $x=1$, then $X_R$ is empty, and there aren't any such numbers. Similar reasoning applies to $Y_R + x$. The conclusion is that $1+1$ has no right options whatsoever.</p>
Answer 2: <p>$1$ is not really defined as $(0,\emptyset)$, but as $(\{0\},\emptyset)$. Then $$1+1=\left(\{0+1\}\cup\{0+1\},(\emptyset+1)\cup(\emptyset+1)\right)$$
However, $\emptyset+1$ is empty since there are no surreals in $\emptyset$ to add to $1$ ($\emptyset+1$ is the set of $x+1$ for all $x\in\emptyset$), so $1+1$ simplifies to $$\left(\{0+1\}\cup\{0+1\},\emptyset\cup\emptyset\right)=\left(\{0+1\},\emptyset\right)\text{.}$$</p>
<p>If you know that $0+1$ is $1$, we have $1+1=(\{1\},\emptyset)$.</p>
<p>To answer your other question, $0$ is distinct from $\emptyset$ in the surreals since every surreal is an ordered pair of sets. $0$ is defined to be the ordered pair $(\emptyset,\emptyset)$.</p>
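To see concretely how the empty set drops out of the recursion, here is a minimal Python sketch (purely illustrative; the pair-of-frozensets representation and all names are assumptions of this note, not Knuth's notation):

```python
# Minimal sketch of surreal addition on (left_set, right_set) pairs of frozensets.
def add(x, y):
    XL, XR = x
    YL, YR = y
    # Iterating over an empty frozenset yields nothing, so an empty
    # option set simply contributes no terms -- there is no "emptyset + 1".
    left  = frozenset(add(xl, y) for xl in XL) | frozenset(add(x, yl) for yl in YL)
    right = frozenset(add(xr, y) for xr in XR) | frozenset(add(x, yr) for yr in YR)
    return (left, right)

ZERO = (frozenset(), frozenset())          # 0 = (emptyset, emptyset)
ONE  = (frozenset({ZERO}), frozenset())    # 1 = ({0}, emptyset)

TWO = add(ONE, ONE)
print(TWO == (frozenset({ONE}), frozenset()))  # True: 1 + 1 = ({1}, emptyset)
```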
|
How do we show that $\int_{0}^{1}{\arctan(x\sqrt{x^2+2})\over \sqrt{x^2+2}}\cdot{dx\over x^2+1}=\left({\pi\over 6}\right)^2$ | 2,088,517 | 5 | https://math.stackexchange.com/questions/2088517/how-do-we-show-that-int-01-arctanx-sqrtx22-over-sqrtx22-cdot | <p>I was going through the the top votes questions and I saw this quite of interesting <a href="https://math.stackexchange.com/questions/580521/generalizing-int-01-frac-arctan-sqrtx2-2-sqrtx2-2">post</a> posted by @Sangchul Lee. I was just messing around with it and found a slightest Variation of Ahmed's integral.</p>
<p><span class="math-container">$$\int_{0}^{1}{\arctan(x\sqrt{x^2+2})\over \sqrt{x^2+2}}\cdot{dx\over x^2+1}=\left({\pi\over 6}\right)^2\tag1$$</span></p>
<p>Maybe a substitution might work:</p>
<p><span class="math-container">$u=x(x^2+2)^{1/2}$</span> the <span class="math-container">$du=x(x^2+2)^{-1/2}+(x^2+2)^{1/2}dx$</span></p>
<p><span class="math-container">$$\int_{\sqrt{2}}^{4\sqrt{3}\over3}{x^2\arctan{u}\over x^3+u^2}\cdot{x^2\over u^2-x^2}du$$</span></p>
<p>More difficult than before. I can't remove the $x$ variable from the integral.</p>
<p>Any suggestions on how to prove <span class="math-container">$(1)$</span>?</p>
| [
{
"answer_id": 2088525,
"score": 7,
"text": "<p>Substitute : $u=\\arctan\\left(x\\sqrt{x^2+2}\\right)$</p>\n\n<p>You will get : $$\\dfrac{\\mathrm{d}u}{\\mathrm{d}x}=\\dfrac{\\sqrt{x^2+2}+\\frac{x^2}{\\sqrt{x^2+2}}}{x^2\\left(x^2+2\\right)+1}$$</p>\n\n<p>Thus :\n$$\\int_{0}^{1}{\\arctan(x\\sqrt{x^2+2})\\over \\sqrt{x^2+2}}\\cdot{dx\\over x^2+1}=\\int_{0}^{\\arctan(\\sqrt{3})}{u\\over 2}\\cdot{du}=\\left[\\frac{u^2}{4}\\right]^{\\frac{\\pi}{3}}_0=\\left({\\pi\\over 6}\\right)^2$$</p>\n"
},
{
"answer_id": 2088529,
"score": 5,
"text": "<p>Using your approach $$u=x(x^2+2)^{1/2}\\implies x=\\sqrt{\\sqrt{u^2+1}-1}\\implies dx=\\frac{u}{2 \\sqrt{u^2+1} \\sqrt{\\sqrt{u^2+1}-1}}\\,du$$ and, after simplifications $$\\int{\\arctan(x\\sqrt{x^2+2})\\over \\sqrt{x^2+2}}\\cdot{dx\\over x^2+1}=\\int\\frac{\\tan ^{-1}(u)}{2 u^2+2}\\,du=\\frac{1}{4} \\tan ^{-1}(u)^2$$</p>\n"
}
] | Question: How do we show that $\int_{0}^{1}{\arctan(x\sqrt{x^2+2})\over \sqrt{x^2+2}}\cdot{dx\over x^2+1}=\left({\pi\over 6}\right)^2$
Answer 1: <p>Substitute : $u=\arctan\left(x\sqrt{x^2+2}\right)$</p>
<p>You will get : $$\dfrac{\mathrm{d}u}{\mathrm{d}x}=\dfrac{\sqrt{x^2+2}+\frac{x^2}{\sqrt{x^2+2}}}{x^2\left(x^2+2\right)+1}$$</p>
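(Bridging step added for clarity: $x^2(x^2+2)+1=x^4+2x^2+1=(x^2+1)^2$ and the numerator equals $\frac{2(x^2+1)}{\sqrt{x^2+2}}$, so $\mathrm{d}u=\frac{2\,\mathrm{d}x}{(x^2+1)\sqrt{x^2+2}}$; the integrand is therefore exactly $\frac{u}{2}\,\mathrm{d}u$.)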
<p>Thus :
$$\int_{0}^{1}{\arctan(x\sqrt{x^2+2})\over \sqrt{x^2+2}}\cdot{dx\over x^2+1}=\int_{0}^{\arctan(\sqrt{3})}{u\over 2}\cdot{du}=\left[\frac{u^2}{4}\right]^{\frac{\pi}{3}}_0=\left({\pi\over 6}\right)^2$$</p>
Answer 2: <p>Using your approach $$u=x(x^2+2)^{1/2}\implies x=\sqrt{\sqrt{u^2+1}-1}\implies dx=\frac{u}{2 \sqrt{u^2+1} \sqrt{\sqrt{u^2+1}-1}}\,du$$ and, after simplifications $$\int{\arctan(x\sqrt{x^2+2})\over \sqrt{x^2+2}}\cdot{dx\over x^2+1}=\int\frac{\tan ^{-1}(u)}{2 u^2+2}\,du=\frac{1}{4} \tan ^{-1}(u)^2$$</p>
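As a quick numerical sanity check of $(1)$ (an addition of this note; it assumes NumPy and SciPy are available):

```python
# Numerical check that the integral equals (pi/6)^2; assumes NumPy/SciPy.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.arctan(x * np.sqrt(x**2 + 2)) / (np.sqrt(x**2 + 2) * (x**2 + 1))
val, err = quad(f, 0.0, 1.0)
print(val, (np.pi / 6)**2)  # both ~ 0.274155677808...
```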
|
A line intersects a hyperbola at the points $(-2,-6)$ and $(4,2)$ and one of the asymptotes of the hyperbola at $(1,-2)$. Find the centre. | 2,088,842 | 5 | https://math.stackexchange.com/questions/2088842/a-line-intersects-a-hyperbola-at-the-points-2-6-and-4-2-and-one-of-the | <blockquote>
<p>A line intersects a hyperbola at the points $(-2,-6)$ and $(4,2)$. It also intersects one of the asymptotes of the hyperbola at the point $(1,-2)$.<br>
Find the centre of the hyperbola. </p>
</blockquote>
<p>My approach:</p>
<p>The midpoint of the two intersection points $(-2,-6)$ and $(4,2)$ is the point $(1,-2)$, and we are given that the line cuts the asymptote at this same point. Hence the answer is $(1,-2)$. But this is only an intuition and I am not sure whether my reasoning is correct. I am looking for another approach to the problem. </p>
| [
{
"answer_id": 2791200,
"score": 3,
"text": "<p>Your intuition can be stated as the proposition below:</p>\n\n<blockquote>\n <p>If $A$ and $B$ are two different points on a hyperbola and the midpoint of $AB$ lies on the asymptotes, then this midpoint is the center of the hyperbola.</p>\n</blockquote>\n\n<p><em>Proof:</em> Note that this proposition is irrelevant to coordinate systems, thus without loss of generality, the hyperbola can be assumed to be $\\dfrac{x^2}{a^2} - \\dfrac{y^2}{b^2} = 1$.</p>\n\n<p>Suppose the coordinates of $A$ and $B$ are $(x_1, y_1)$ and $(x_2, y_2)$, respectively, then $x_1, x_2 ≠ 0$ and$$\n\\frac{x_1^2}{a^2} - \\frac{y_1^2}{b^2} = 1, \\quad \\frac{x_2^2}{a^2} - \\frac{y_2^2}{b^2} = 1. \\tag{1}\n$$\nBecause the asymptotes are $\\dfrac{x^2}{a^2} - \\dfrac{y^2}{b^2} = 0$ and $\\left( \\dfrac{x_1 + x_2}{2}, \\dfrac{y_1 + y_2}{2} \\right)$ lies on the asymptotes, then$$\n\\frac{(x_1 + x_2)^2}{a^2} - \\frac{(y_1 + y_2)^2}{b^2} = 0. \\tag{2}\n$$</p>\n\n<p>Now, suppose $x_1 + x_2 ≠ 0$, then $y_1 + y_2 ≠ 0$ by (2). From (1) there is$$\n\\frac{x_1^2 - x_2^2}{a^2} = \\frac{y_1^2 - y_2^2}{b^2},\n$$\nand from (2) there is$$\n\\frac{(x_1 + x_2)^2}{a^2} = \\frac{(y_1 + y_2)^2}{b^2},\n$$\nthus\\begin{align*}\n&\\mathrel{\\phantom{\\Longrightarrow}}{} \\frac{x_1 - x_2}{x_1 + x_2} = \\frac{\\dfrac{x_1^2 - x_2^2}{a^2}}{\\dfrac{(x_1 + x_2)^2}{a^2}} = \\frac{\\dfrac{y_1^2 - y_2^2}{b^2}}{\\dfrac{(y_1 + y_2)^2}{b^2}} = \\frac{y_1 - y_2}{y_1 + y_2}\\\\\n&\\Longrightarrow \\frac{2x_1}{x_1 + x_2} = \\frac{x_1 - x_2}{x_1 + x_2} + 1 = \\frac{y_1 - y_2}{y_1 + y_2} + 1 = \\frac{2y_1}{y_1 + y_2}\\\\\n&\\Longrightarrow \\frac{y_1}{x_1} = \\frac{y_1 + y_2}{x_1 + x_2} \\Longrightarrow \\frac{y_1}{x_1} = \\frac{y_2}{x_2} := c.\n\\end{align*}\nNote that $x_1 + x_2 ≠ 0$, plugging $y_1 = cx_1$ and $y_2 = cx_2$ into (2) to get $\\dfrac{1}{a^2} - \\dfrac{c^2}{b^2} = 0$, then by (1) there is$$\n1 = \\frac{x_1^2}{a^2} - \\frac{y_1^2}{b^2} = \\left( \\frac{1}{a^2} - \\frac{c^2}{b^2} \\right) x_1^2 = 0,\n$$\na contradiction. Therefore, $x_1 + x_2 = 0$, which implies $y_1 + y_2 = 0$ by (2). Hence the midpoint of $AB$ is the center $(0, 0)$ of the hyperbola.</p>\n"
},
{
"answer_id": 2791217,
"score": 3,
"text": "<p>Let $A$ symmetric matrix and let $u_1^TAu_1=u_2^TAu_2=1$. Then\n\\begin{align}\n\\underbrace{\\left(\\frac{u_1-u_2}{2}\\right)^TA\\left(\\frac{u_1-u_2}{2}\\right)}_{P_1}+\\underbrace{\\left(\\frac{u_1+u_2}{2}\\right)^TA\\left(\\frac{u_1+u_2}{2}\\right)}_{P_2}=1~~,\\tag{1}\n\\end{align}\nwhich implies that if $P_2$ above is zero, then $P_1=1$.</p>\n\n<p>Any hyperbola (rotated or not) centered in the origin can be defined as the points $\\vec{x}$ such that $x^TAx=1$ for A symmetric$^*$, and the asymptotes as those point such that $x^TAx=0$. In Equation (1), $u_1$ and $u_2$ are in the hyperbola. Midpoint is $(u_1+u_2)/2$, which being in the asymptote means $P_2=0$. Therefore, $P_1=1$ or $(u_1-u_2)/2$ is in the hyperbola. But if $u_2$ is in the hyperbola (centered in origin) so it is $-u_2$. Therefore, we have that $u_1$, $-u_2$, and $(u_1-u_2)/2$ are in the hyperbola, which is impossible unless $(u_1-u_2)/2=u_1$ or $=-u_2$ (a line intersects hyperbola at most in two points). In either case, $u_1+u_2=0$ or, what is the same, the midpoint between these two points is the origin. </p>\n\n<p>Reasoning above can be taken outside origin taking the equation for the hyperbola as $(x-x_0)^TA(x-x_0)=1$ and the asymptotes $(x-x_0)^TA(x-x_0)=0$. </p>\n\n<p>$^*$ $A$ must have one eigenvalue positive and one negative, but it is not necessary for this proof. </p>\n"
}
] | Question: A line intersects a hyperbola at the points $(-2,-6)$ and $(4,2)$ and one of the asymptotes of the hyperbola at $(1,-2)$. Find the centre.
Answer 1: <p>Your intuition can be stated as the proposition below:</p>
<blockquote>
<p>If $A$ and $B$ are two different points on a hyperbola and the midpoint of $AB$ lies on the asymptotes, then this midpoint is the center of the hyperbola.</p>
</blockquote>
<p><em>Proof:</em> Note that this proposition is independent of the choice of coordinates, thus without loss of generality the hyperbola can be assumed to be $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$.</p>
<p>Suppose the coordinates of $A$ and $B$ are $(x_1, y_1)$ and $(x_2, y_2)$, respectively; then $x_1, x_2 ≠ 0$ and$$
\frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = 1, \quad \frac{x_2^2}{a^2} - \frac{y_2^2}{b^2} = 1. \tag{1}
$$
Because the asymptotes are $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 0$ and $\left( \dfrac{x_1 + x_2}{2}, \dfrac{y_1 + y_2}{2} \right)$ lies on the asymptotes, we have$$
\frac{(x_1 + x_2)^2}{a^2} - \frac{(y_1 + y_2)^2}{b^2} = 0. \tag{2}
$$</p>
<p>Now, suppose $x_1 + x_2 ≠ 0$; then $y_1 + y_2 ≠ 0$ by (2). From (1) we get$$
\frac{x_1^2 - x_2^2}{a^2} = \frac{y_1^2 - y_2^2}{b^2},
$$
and from (2) we get$$
\frac{(x_1 + x_2)^2}{a^2} = \frac{(y_1 + y_2)^2}{b^2},
$$
thus\begin{align*}
&\mathrel{\phantom{\Longrightarrow}}{} \frac{x_1 - x_2}{x_1 + x_2} = \frac{\dfrac{x_1^2 - x_2^2}{a^2}}{\dfrac{(x_1 + x_2)^2}{a^2}} = \frac{\dfrac{y_1^2 - y_2^2}{b^2}}{\dfrac{(y_1 + y_2)^2}{b^2}} = \frac{y_1 - y_2}{y_1 + y_2}\\
&\Longrightarrow \frac{2x_1}{x_1 + x_2} = \frac{x_1 - x_2}{x_1 + x_2} + 1 = \frac{y_1 - y_2}{y_1 + y_2} + 1 = \frac{2y_1}{y_1 + y_2}\\
&\Longrightarrow \frac{y_1}{x_1} = \frac{y_1 + y_2}{x_1 + x_2} \Longrightarrow \frac{y_1}{x_1} = \frac{y_2}{x_2} := c.
\end{align*}
Since $x_1 + x_2 ≠ 0$, plugging $y_1 = cx_1$ and $y_2 = cx_2$ into (2) gives $\dfrac{1}{a^2} - \dfrac{c^2}{b^2} = 0$, and then by (1)$$
1 = \frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = \left( \frac{1}{a^2} - \frac{c^2}{b^2} \right) x_1^2 = 0,
$$
a contradiction. Therefore, $x_1 + x_2 = 0$, which implies $y_1 + y_2 = 0$ by (2). Hence the midpoint of $AB$ is the center $(0, 0)$ of the hyperbola.</p>
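To see the proposition at work on the data from this question, here is an added sketch using only the Python standard library. Any axis-aligned hyperbola centered at $(1,-2)$ passing through $(-2,-6)$ also passes through $(4,2)$, since $9/a^2-16/b^2=1$ is the single condition both points impose; the values $a^2=9/2$, $b^2=16$ below are one hand-picked member of that family, not taken from the original answer.
```python
# Check one concrete hyperbola with center (1, -2) through both points,
# and that the midpoint of the two points is that center.
from fractions import Fraction as F

cx, cy = F(1), F(-2)     # conjectured center
a2, b2 = F(9, 2), F(16)  # hand-picked so that 9/a2 - 16/b2 = 1

def on_hyperbola(x, y):
    return (x - cx)**2 / a2 - (y - cy)**2 / b2 == 1

assert on_hyperbola(F(-2), F(-6)) and on_hyperbola(F(4), F(2))
assert ((F(-2) + F(4)) / 2, (F(-6) + F(2)) / 2) == (cx, cy)
print("both points lie on the hyperbola; their midpoint is the center")
```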
Answer 2: <p>Let $A$ be a symmetric matrix and let $u_1^TAu_1=u_2^TAu_2=1$. Then
\begin{align}
\underbrace{\left(\frac{u_1-u_2}{2}\right)^TA\left(\frac{u_1-u_2}{2}\right)}_{P_1}+\underbrace{\left(\frac{u_1+u_2}{2}\right)^TA\left(\frac{u_1+u_2}{2}\right)}_{P_2}=1~~,\tag{1}
\end{align}
which implies that if $P_2$ above is zero, then $P_1=1$.</p>
<p>Any hyperbola (rotated or not) centered at the origin can be defined as the set of points $\vec{x}$ such that $x^TAx=1$ for a symmetric matrix $A$,$^*$ and its asymptotes as the points such that $x^TAx=0$. In Equation (1), $u_1$ and $u_2$ lie on the hyperbola. Their midpoint is $(u_1+u_2)/2$, and its lying on an asymptote means $P_2=0$. Therefore $P_1=1$, i.e. $(u_1-u_2)/2$ lies on the hyperbola. But if $u_2$ lies on the hyperbola (centered at the origin), so does $-u_2$. Hence $u_1$, $-u_2$, and their midpoint $(u_1-u_2)/2$ all lie on the hyperbola, which is impossible unless $(u_1-u_2)/2=u_1$ or $(u_1-u_2)/2=-u_2$ (a line intersects a hyperbola in at most two points). In either case $u_1+u_2=0$ or, what is the same, the midpoint of the two points is the origin. </p>
<p>The reasoning above carries over to a center $x_0$ away from the origin by taking the equation of the hyperbola as $(x-x_0)^TA(x-x_0)=1$ and of the asymptotes as $(x-x_0)^TA(x-x_0)=0$. </p>
<p>$^*$ $A$ must have one positive and one negative eigenvalue, but this fact is not needed for the proof. </p>
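Identity (1) is just the parallelogram law for the quadratic form $v\mapsto v^TAv$; the following added sketch (assuming NumPy is available) checks it numerically for an explicitly indefinite symmetric $A$:
```python
# Verify P1 + P2 == 1 for random points u1, u2 with u^T A u = 1.
import numpy as np

A = np.array([[1.0, 0.3],
              [0.3, -1.0]])  # symmetric, one positive and one negative eigenvalue

def q(v):
    # the quadratic form v^T A v
    return float(v @ A @ v)

rng = np.random.default_rng(42)

def point_on_hyperbola():
    # rescale a random vector with q(v) > 0 so that q(u) = 1
    while True:
        v = rng.standard_normal(2)
        if q(v) > 0:
            return v / np.sqrt(q(v))

u1, u2 = point_on_hyperbola(), point_on_hyperbola()
P1 = q((u1 - u2) / 2)
P2 = q((u1 + u2) / 2)
print(P1 + P2)  # 1.0 up to floating-point error
```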
Answer 3: <p>Your intuition can be stated as the proposition below:</p>
<blockquote>
<p>If $A$ and $B$ are two different points on a hyperbola and the midpoint of $AB$ lies on the asymptotes, then this midpoint is the center of the hyperbola.</p>
</blockquote>
<p><em>Proof:</em> Note that this proposition is irrelevant to coordinate systems, thus without loss of generality, the hyperbola can be assumed to be $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$.</p>
<p>Suppose the coordinates of $A$ and $B$ are $(x_1, y_1)$ and $(x_2, y_2)$, respectively, then $x_1, x_2 ≠ 0$ and$$
\frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = 1, \quad \frac{x_2^2}{a^2} - \frac{y_2^2}{b^2} = 1. \tag{1}
$$
Because the asymptotes are $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 0$ and $\left( \dfrac{x_1 + x_2}{2}, \dfrac{y_1 + y_2}{2} \right)$ lies on the asymptotes, then$$
\frac{(x_1 + x_2)^2}{a^2} - \frac{(y_1 + y_2)^2}{b^2} = 0. \tag{2}
$$</p>
<p>Now, suppose $x_1 + x_2 ≠ 0$, then $y_1 + y_2 ≠ 0$ by (2). From (1) there is$$
\frac{x_1^2 - x_2^2}{a^2} = \frac{y_1^2 - y_2^2}{b^2},
$$
and from (2) there is$$
\frac{(x_1 + x_2)^2}{a^2} = \frac{(y_1 + y_2)^2}{b^2},
$$
thus\begin{align*}
&\mathrel{\phantom{\Longrightarrow}}{} \frac{x_1 - x_2}{x_1 + x_2} = \frac{\dfrac{x_1^2 - x_2^2}{a^2}}{\dfrac{(x_1 + x_2)^2}{a^2}} = \frac{\dfrac{y_1^2 - y_2^2}{b^2}}{\dfrac{(y_1 + y_2)^2}{b^2}} = \frac{y_1 - y_2}{y_1 + y_2}\\
&\Longrightarrow \frac{2x_1}{x_1 + x_2} = \frac{x_1 - x_2}{x_1 + x_2} + 1 = \frac{y_1 - y_2}{y_1 + y_2} + 1 = \frac{2y_1}{y_1 + y_2}\\
&\Longrightarrow \frac{y_1}{x_1} = \frac{y_1 + y_2}{x_1 + x_2} \Longrightarrow \frac{y_1}{x_1} = \frac{y_2}{x_2} := c.
\end{align*}
Note that $x_1 + x_2 ≠ 0$, plugging $y_1 = cx_1$ and $y_2 = cx_2$ into (2) to get $\dfrac{1}{a^2} - \dfrac{c^2}{b^2} = 0$, then by (1) there is$$
1 = \frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = \left( \frac{1}{a^2} - \frac{c^2}{b^2} \right) x_1^2 = 0,
$$
a contradiction. Therefore, $x_1 + x_2 = 0$, which implies $y_1 + y_2 = 0$ by (2). Hence the midpoint of $AB$ is the center $(0, 0)$ of the hyperbola.</p>
Answer 4: <p>Let $A$ symmetric matrix and let $u_1^TAu_1=u_2^TAu_2=1$. Then
\begin{align}
\underbrace{\left(\frac{u_1-u_2}{2}\right)^TA\left(\frac{u_1-u_2}{2}\right)}_{P_1}+\underbrace{\left(\frac{u_1+u_2}{2}\right)^TA\left(\frac{u_1+u_2}{2}\right)}_{P_2}=1~~,\tag{1}
\end{align}
which implies that if $P_2$ above is zero, then $P_1=1$.</p>
<p>Any hyperbola (rotated or not) centered in the origin can be defined as the points $\vec{x}$ such that $x^TAx=1$ for A symmetric$^*$, and the asymptotes as those point such that $x^TAx=0$. In Equation (1), $u_1$ and $u_2$ are in the hyperbola. Midpoint is $(u_1+u_2)/2$, which being in the asymptote means $P_2=0$. Therefore, $P_1=1$ or $(u_1-u_2)/2$ is in the hyperbola. But if $u_2$ is in the hyperbola (centered in origin) so it is $-u_2$. Therefore, we have that $u_1$, $-u_2$, and $(u_1-u_2)/2$ are in the hyperbola, which is impossible unless $(u_1-u_2)/2=u_1$ or $=-u_2$ (a line intersects hyperbola at most in two points). In either case, $u_1+u_2=0$ or, what is the same, the midpoint between these two points is the origin. </p>
<p>Reasoning above can be taken outside origin taking the equation for the hyperbola as $(x-x_0)^TA(x-x_0)=1$ and the asymptotes $(x-x_0)^TA(x-x_0)=0$. </p>
<p>$^*$ $A$ must have one eigenvalue positive and one negative, but it is not necessary for this proof. </p>
Answer 5: <p>Your intuition can be stated as the proposition below:</p>
<blockquote>
<p>If $A$ and $B$ are two different points on a hyperbola and the midpoint of $AB$ lies on the asymptotes, then this midpoint is the center of the hyperbola.</p>
</blockquote>
<p><em>Proof:</em> Note that this proposition is irrelevant to coordinate systems, thus without loss of generality, the hyperbola can be assumed to be $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$.</p>
<p>Suppose the coordinates of $A$ and $B$ are $(x_1, y_1)$ and $(x_2, y_2)$, respectively, then $x_1, x_2 ≠ 0$ and$$
\frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = 1, \quad \frac{x_2^2}{a^2} - \frac{y_2^2}{b^2} = 1. \tag{1}
$$
Because the asymptotes are $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 0$ and $\left( \dfrac{x_1 + x_2}{2}, \dfrac{y_1 + y_2}{2} \right)$ lies on the asymptotes, then$$
\frac{(x_1 + x_2)^2}{a^2} - \frac{(y_1 + y_2)^2}{b^2} = 0. \tag{2}
$$</p>
<p>Now, suppose $x_1 + x_2 ≠ 0$, then $y_1 + y_2 ≠ 0$ by (2). From (1) there is$$
\frac{x_1^2 - x_2^2}{a^2} = \frac{y_1^2 - y_2^2}{b^2},
$$
and from (2) there is$$
\frac{(x_1 + x_2)^2}{a^2} = \frac{(y_1 + y_2)^2}{b^2},
$$
thus\begin{align*}
&\mathrel{\phantom{\Longrightarrow}}{} \frac{x_1 - x_2}{x_1 + x_2} = \frac{\dfrac{x_1^2 - x_2^2}{a^2}}{\dfrac{(x_1 + x_2)^2}{a^2}} = \frac{\dfrac{y_1^2 - y_2^2}{b^2}}{\dfrac{(y_1 + y_2)^2}{b^2}} = \frac{y_1 - y_2}{y_1 + y_2}\\
&\Longrightarrow \frac{2x_1}{x_1 + x_2} = \frac{x_1 - x_2}{x_1 + x_2} + 1 = \frac{y_1 - y_2}{y_1 + y_2} + 1 = \frac{2y_1}{y_1 + y_2}\\
&\Longrightarrow \frac{y_1}{x_1} = \frac{y_1 + y_2}{x_1 + x_2} \Longrightarrow \frac{y_1}{x_1} = \frac{y_2}{x_2} := c.
\end{align*}
Note that $x_1 + x_2 ≠ 0$, plugging $y_1 = cx_1$ and $y_2 = cx_2$ into (2) to get $\dfrac{1}{a^2} - \dfrac{c^2}{b^2} = 0$, then by (1) there is$$
1 = \frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = \left( \frac{1}{a^2} - \frac{c^2}{b^2} \right) x_1^2 = 0,
$$
a contradiction. Therefore, $x_1 + x_2 = 0$, which implies $y_1 + y_2 = 0$ by (2). Hence the midpoint of $AB$ is the center $(0, 0)$ of the hyperbola.</p>
Answer 6: <p>Let $A$ symmetric matrix and let $u_1^TAu_1=u_2^TAu_2=1$. Then
\begin{align}
\underbrace{\left(\frac{u_1-u_2}{2}\right)^TA\left(\frac{u_1-u_2}{2}\right)}_{P_1}+\underbrace{\left(\frac{u_1+u_2}{2}\right)^TA\left(\frac{u_1+u_2}{2}\right)}_{P_2}=1~~,\tag{1}
\end{align}
which implies that if $P_2$ above is zero, then $P_1=1$.</p>
<p>Any hyperbola (rotated or not) centered in the origin can be defined as the points $\vec{x}$ such that $x^TAx=1$ for A symmetric$^*$, and the asymptotes as those point such that $x^TAx=0$. In Equation (1), $u_1$ and $u_2$ are in the hyperbola. Midpoint is $(u_1+u_2)/2$, which being in the asymptote means $P_2=0$. Therefore, $P_1=1$ or $(u_1-u_2)/2$ is in the hyperbola. But if $u_2$ is in the hyperbola (centered in origin) so it is $-u_2$. Therefore, we have that $u_1$, $-u_2$, and $(u_1-u_2)/2$ are in the hyperbola, which is impossible unless $(u_1-u_2)/2=u_1$ or $=-u_2$ (a line intersects hyperbola at most in two points). In either case, $u_1+u_2=0$ or, what is the same, the midpoint between these two points is the origin. </p>
<p>Reasoning above can be taken outside origin taking the equation for the hyperbola as $(x-x_0)^TA(x-x_0)=1$ and the asymptotes $(x-x_0)^TA(x-x_0)=0$. </p>
<p>$^*$ $A$ must have one eigenvalue positive and one negative, but it is not necessary for this proof. </p>
Answer 7: <p>Your intuition can be stated as the proposition below:</p>
<blockquote>
<p>If $A$ and $B$ are two different points on a hyperbola and the midpoint of $AB$ lies on the asymptotes, then this midpoint is the center of the hyperbola.</p>
</blockquote>
<p><em>Proof:</em> Note that this proposition is irrelevant to coordinate systems, thus without loss of generality, the hyperbola can be assumed to be $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$.</p>
<p>Suppose the coordinates of $A$ and $B$ are $(x_1, y_1)$ and $(x_2, y_2)$, respectively, then $x_1, x_2 ≠ 0$ and$$
\frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = 1, \quad \frac{x_2^2}{a^2} - \frac{y_2^2}{b^2} = 1. \tag{1}
$$
Because the asymptotes are $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 0$ and $\left( \dfrac{x_1 + x_2}{2}, \dfrac{y_1 + y_2}{2} \right)$ lies on the asymptotes, then$$
\frac{(x_1 + x_2)^2}{a^2} - \frac{(y_1 + y_2)^2}{b^2} = 0. \tag{2}
$$</p>
<p>Now, suppose $x_1 + x_2 ≠ 0$, then $y_1 + y_2 ≠ 0$ by (2). From (1) there is$$
\frac{x_1^2 - x_2^2}{a^2} = \frac{y_1^2 - y_2^2}{b^2},
$$
and from (2) there is$$
\frac{(x_1 + x_2)^2}{a^2} = \frac{(y_1 + y_2)^2}{b^2},
$$
thus\begin{align*}
&\mathrel{\phantom{\Longrightarrow}}{} \frac{x_1 - x_2}{x_1 + x_2} = \frac{\dfrac{x_1^2 - x_2^2}{a^2}}{\dfrac{(x_1 + x_2)^2}{a^2}} = \frac{\dfrac{y_1^2 - y_2^2}{b^2}}{\dfrac{(y_1 + y_2)^2}{b^2}} = \frac{y_1 - y_2}{y_1 + y_2}\\
&\Longrightarrow \frac{2x_1}{x_1 + x_2} = \frac{x_1 - x_2}{x_1 + x_2} + 1 = \frac{y_1 - y_2}{y_1 + y_2} + 1 = \frac{2y_1}{y_1 + y_2}\\
&\Longrightarrow \frac{y_1}{x_1} = \frac{y_1 + y_2}{x_1 + x_2} \Longrightarrow \frac{y_1}{x_1} = \frac{y_2}{x_2} := c.
\end{align*}
Note that $x_1 + x_2 ≠ 0$, plugging $y_1 = cx_1$ and $y_2 = cx_2$ into (2) to get $\dfrac{1}{a^2} - \dfrac{c^2}{b^2} = 0$, then by (1) there is$$
1 = \frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = \left( \frac{1}{a^2} - \frac{c^2}{b^2} \right) x_1^2 = 0,
$$
a contradiction. Therefore, $x_1 + x_2 = 0$, which implies $y_1 + y_2 = 0$ by (2). Hence the midpoint of $AB$ is the center $(0, 0)$ of the hyperbola.</p>
Answer 8: <p>Let $A$ symmetric matrix and let $u_1^TAu_1=u_2^TAu_2=1$. Then
\begin{align}
\underbrace{\left(\frac{u_1-u_2}{2}\right)^TA\left(\frac{u_1-u_2}{2}\right)}_{P_1}+\underbrace{\left(\frac{u_1+u_2}{2}\right)^TA\left(\frac{u_1+u_2}{2}\right)}_{P_2}=1~~,\tag{1}
\end{align}
which implies that if $P_2$ above is zero, then $P_1=1$.</p>
<p>Any hyperbola (rotated or not) centered in the origin can be defined as the points $\vec{x}$ such that $x^TAx=1$ for A symmetric$^*$, and the asymptotes as those point such that $x^TAx=0$. In Equation (1), $u_1$ and $u_2$ are in the hyperbola. Midpoint is $(u_1+u_2)/2$, which being in the asymptote means $P_2=0$. Therefore, $P_1=1$ or $(u_1-u_2)/2$ is in the hyperbola. But if $u_2$ is in the hyperbola (centered in origin) so it is $-u_2$. Therefore, we have that $u_1$, $-u_2$, and $(u_1-u_2)/2$ are in the hyperbola, which is impossible unless $(u_1-u_2)/2=u_1$ or $=-u_2$ (a line intersects hyperbola at most in two points). In either case, $u_1+u_2=0$ or, what is the same, the midpoint between these two points is the origin. </p>
<p>Reasoning above can be taken outside origin taking the equation for the hyperbola as $(x-x_0)^TA(x-x_0)=1$ and the asymptotes $(x-x_0)^TA(x-x_0)=0$. </p>
<p>$^*$ $A$ must have one eigenvalue positive and one negative, but it is not necessary for this proof. </p>
Answer 9: <p>Your intuition can be stated as the proposition below:</p>
<blockquote>
<p>If $A$ and $B$ are two different points on a hyperbola and the midpoint of $AB$ lies on the asymptotes, then this midpoint is the center of the hyperbola.</p>
</blockquote>
<p><em>Proof:</em> Note that this proposition is irrelevant to coordinate systems, thus without loss of generality, the hyperbola can be assumed to be $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$.</p>
<p>Suppose the coordinates of $A$ and $B$ are $(x_1, y_1)$ and $(x_2, y_2)$, respectively, then $x_1, x_2 ≠ 0$ and$$
\frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = 1, \quad \frac{x_2^2}{a^2} - \frac{y_2^2}{b^2} = 1. \tag{1}
$$
Because the asymptotes are $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 0$ and $\left( \dfrac{x_1 + x_2}{2}, \dfrac{y_1 + y_2}{2} \right)$ lies on the asymptotes, then$$
\frac{(x_1 + x_2)^2}{a^2} - \frac{(y_1 + y_2)^2}{b^2} = 0. \tag{2}
$$</p>
<p>Now, suppose $x_1 + x_2 ≠ 0$, then $y_1 + y_2 ≠ 0$ by (2). From (1) there is$$
\frac{x_1^2 - x_2^2}{a^2} = \frac{y_1^2 - y_2^2}{b^2},
$$
and from (2) there is$$
\frac{(x_1 + x_2)^2}{a^2} = \frac{(y_1 + y_2)^2}{b^2},
$$
thus\begin{align*}
&\mathrel{\phantom{\Longrightarrow}}{} \frac{x_1 - x_2}{x_1 + x_2} = \frac{\dfrac{x_1^2 - x_2^2}{a^2}}{\dfrac{(x_1 + x_2)^2}{a^2}} = \frac{\dfrac{y_1^2 - y_2^2}{b^2}}{\dfrac{(y_1 + y_2)^2}{b^2}} = \frac{y_1 - y_2}{y_1 + y_2}\\
&\Longrightarrow \frac{2x_1}{x_1 + x_2} = \frac{x_1 - x_2}{x_1 + x_2} + 1 = \frac{y_1 - y_2}{y_1 + y_2} + 1 = \frac{2y_1}{y_1 + y_2}\\
&\Longrightarrow \frac{y_1}{x_1} = \frac{y_1 + y_2}{x_1 + x_2} \Longrightarrow \frac{y_1}{x_1} = \frac{y_2}{x_2} := c.
\end{align*}
Note that $x_1 + x_2 ≠ 0$, plugging $y_1 = cx_1$ and $y_2 = cx_2$ into (2) to get $\dfrac{1}{a^2} - \dfrac{c^2}{b^2} = 0$, then by (1) there is$$
1 = \frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = \left( \frac{1}{a^2} - \frac{c^2}{b^2} \right) x_1^2 = 0,
$$
a contradiction. Therefore, $x_1 + x_2 = 0$, which implies $y_1 + y_2 = 0$ by (2). Hence the midpoint of $AB$ is the center $(0, 0)$ of the hyperbola.</p>
Answer 10: <p>Let $A$ symmetric matrix and let $u_1^TAu_1=u_2^TAu_2=1$. Then
\begin{align}
\underbrace{\left(\frac{u_1-u_2}{2}\right)^TA\left(\frac{u_1-u_2}{2}\right)}_{P_1}+\underbrace{\left(\frac{u_1+u_2}{2}\right)^TA\left(\frac{u_1+u_2}{2}\right)}_{P_2}=1~~,\tag{1}
\end{align}
which implies that if $P_2$ above is zero, then $P_1=1$.</p>
<p>Any hyperbola (rotated or not) centered in the origin can be defined as the points $\vec{x}$ such that $x^TAx=1$ for A symmetric$^*$, and the asymptotes as those point such that $x^TAx=0$. In Equation (1), $u_1$ and $u_2$ are in the hyperbola. Midpoint is $(u_1+u_2)/2$, which being in the asymptote means $P_2=0$. Therefore, $P_1=1$ or $(u_1-u_2)/2$ is in the hyperbola. But if $u_2$ is in the hyperbola (centered in origin) so it is $-u_2$. Therefore, we have that $u_1$, $-u_2$, and $(u_1-u_2)/2$ are in the hyperbola, which is impossible unless $(u_1-u_2)/2=u_1$ or $=-u_2$ (a line intersects hyperbola at most in two points). In either case, $u_1+u_2=0$ or, what is the same, the midpoint between these two points is the origin. </p>
<p>Reasoning above can be taken outside origin taking the equation for the hyperbola as $(x-x_0)^TA(x-x_0)=1$ and the asymptotes $(x-x_0)^TA(x-x_0)=0$. </p>
<p>$^*$ $A$ must have one eigenvalue positive and one negative, but it is not necessary for this proof. </p>
Answer 11: <p>Your intuition can be stated as the proposition below:</p>
<blockquote>
<p>If $A$ and $B$ are two different points on a hyperbola and the midpoint of $AB$ lies on the asymptotes, then this midpoint is the center of the hyperbola.</p>
</blockquote>
<p><em>Proof:</em> Note that this proposition is irrelevant to coordinate systems, thus without loss of generality, the hyperbola can be assumed to be $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$.</p>
<p>Suppose the coordinates of $A$ and $B$ are $(x_1, y_1)$ and $(x_2, y_2)$, respectively, then $x_1, x_2 ≠ 0$ and$$
\frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = 1, \quad \frac{x_2^2}{a^2} - \frac{y_2^2}{b^2} = 1. \tag{1}
$$
Because the asymptotes are $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 0$ and $\left( \dfrac{x_1 + x_2}{2}, \dfrac{y_1 + y_2}{2} \right)$ lies on the asymptotes, then$$
\frac{(x_1 + x_2)^2}{a^2} - \frac{(y_1 + y_2)^2}{b^2} = 0. \tag{2}
$$</p>
<p>Now, suppose $x_1 + x_2 ≠ 0$, then $y_1 + y_2 ≠ 0$ by (2). From (1) there is$$
\frac{x_1^2 - x_2^2}{a^2} = \frac{y_1^2 - y_2^2}{b^2},
$$
and from (2) there is$$
\frac{(x_1 + x_2)^2}{a^2} = \frac{(y_1 + y_2)^2}{b^2},
$$
thus\begin{align*}
&\mathrel{\phantom{\Longrightarrow}}{} \frac{x_1 - x_2}{x_1 + x_2} = \frac{\dfrac{x_1^2 - x_2^2}{a^2}}{\dfrac{(x_1 + x_2)^2}{a^2}} = \frac{\dfrac{y_1^2 - y_2^2}{b^2}}{\dfrac{(y_1 + y_2)^2}{b^2}} = \frac{y_1 - y_2}{y_1 + y_2}\\
&\Longrightarrow \frac{2x_1}{x_1 + x_2} = \frac{x_1 - x_2}{x_1 + x_2} + 1 = \frac{y_1 - y_2}{y_1 + y_2} + 1 = \frac{2y_1}{y_1 + y_2}\\
&\Longrightarrow \frac{y_1}{x_1} = \frac{y_1 + y_2}{x_1 + x_2} \Longrightarrow \frac{y_1}{x_1} = \frac{y_2}{x_2} := c.
\end{align*}
Note that $x_1 + x_2 ≠ 0$, plugging $y_1 = cx_1$ and $y_2 = cx_2$ into (2) to get $\dfrac{1}{a^2} - \dfrac{c^2}{b^2} = 0$, then by (1) there is$$
1 = \frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = \left( \frac{1}{a^2} - \frac{c^2}{b^2} \right) x_1^2 = 0,
$$
a contradiction. Therefore, $x_1 + x_2 = 0$, which implies $y_1 + y_2 = 0$ by (2). Hence the midpoint of $AB$ is the center $(0, 0)$ of the hyperbola.</p>
Answer 12: <p>Let $A$ symmetric matrix and let $u_1^TAu_1=u_2^TAu_2=1$. Then
\begin{align}
\underbrace{\left(\frac{u_1-u_2}{2}\right)^TA\left(\frac{u_1-u_2}{2}\right)}_{P_1}+\underbrace{\left(\frac{u_1+u_2}{2}\right)^TA\left(\frac{u_1+u_2}{2}\right)}_{P_2}=1~~,\tag{1}
\end{align}
which implies that if $P_2$ above is zero, then $P_1=1$.</p>
<p>Any hyperbola (rotated or not) centered in the origin can be defined as the points $\vec{x}$ such that $x^TAx=1$ for A symmetric$^*$, and the asymptotes as those point such that $x^TAx=0$. In Equation (1), $u_1$ and $u_2$ are in the hyperbola. Midpoint is $(u_1+u_2)/2$, which being in the asymptote means $P_2=0$. Therefore, $P_1=1$ or $(u_1-u_2)/2$ is in the hyperbola. But if $u_2$ is in the hyperbola (centered in origin) so it is $-u_2$. Therefore, we have that $u_1$, $-u_2$, and $(u_1-u_2)/2$ are in the hyperbola, which is impossible unless $(u_1-u_2)/2=u_1$ or $=-u_2$ (a line intersects hyperbola at most in two points). In either case, $u_1+u_2=0$ or, what is the same, the midpoint between these two points is the origin. </p>
<p>Reasoning above can be taken outside origin taking the equation for the hyperbola as $(x-x_0)^TA(x-x_0)=1$ and the asymptotes $(x-x_0)^TA(x-x_0)=0$. </p>
<p>$^*$ $A$ must have one eigenvalue positive and one negative, but it is not necessary for this proof. </p>
Answer 13: <p>Your intuition can be stated as the proposition below:</p>
<blockquote>
<p>If $A$ and $B$ are two different points on a hyperbola and the midpoint of $AB$ lies on the asymptotes, then this midpoint is the center of the hyperbola.</p>
</blockquote>
<p><em>Proof:</em> Note that this proposition is irrelevant to coordinate systems, thus without loss of generality, the hyperbola can be assumed to be $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$.</p>
<p>Suppose the coordinates of $A$ and $B$ are $(x_1, y_1)$ and $(x_2, y_2)$, respectively, then $x_1, x_2 ≠ 0$ and$$
\frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = 1, \quad \frac{x_2^2}{a^2} - \frac{y_2^2}{b^2} = 1. \tag{1}
$$
Because the asymptotes are $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 0$ and $\left( \dfrac{x_1 + x_2}{2}, \dfrac{y_1 + y_2}{2} \right)$ lies on the asymptotes, then$$
\frac{(x_1 + x_2)^2}{a^2} - \frac{(y_1 + y_2)^2}{b^2} = 0. \tag{2}
$$</p>
<p>Now, suppose $x_1 + x_2 ≠ 0$, then $y_1 + y_2 ≠ 0$ by (2). From (1) there is$$
\frac{x_1^2 - x_2^2}{a^2} = \frac{y_1^2 - y_2^2}{b^2},
$$
and from (2) there is$$
\frac{(x_1 + x_2)^2}{a^2} = \frac{(y_1 + y_2)^2}{b^2},
$$
thus\begin{align*}
&\mathrel{\phantom{\Longrightarrow}}{} \frac{x_1 - x_2}{x_1 + x_2} = \frac{\dfrac{x_1^2 - x_2^2}{a^2}}{\dfrac{(x_1 + x_2)^2}{a^2}} = \frac{\dfrac{y_1^2 - y_2^2}{b^2}}{\dfrac{(y_1 + y_2)^2}{b^2}} = \frac{y_1 - y_2}{y_1 + y_2}\\
&\Longrightarrow \frac{2x_1}{x_1 + x_2} = \frac{x_1 - x_2}{x_1 + x_2} + 1 = \frac{y_1 - y_2}{y_1 + y_2} + 1 = \frac{2y_1}{y_1 + y_2}\\
&\Longrightarrow \frac{y_1}{x_1} = \frac{y_1 + y_2}{x_1 + x_2} \Longrightarrow \frac{y_1}{x_1} = \frac{y_2}{x_2} := c.
\end{align*}
Note that $x_1 + x_2 ≠ 0$, plugging $y_1 = cx_1$ and $y_2 = cx_2$ into (2) to get $\dfrac{1}{a^2} - \dfrac{c^2}{b^2} = 0$, then by (1) there is$$
1 = \frac{x_1^2}{a^2} - \frac{y_1^2}{b^2} = \left( \frac{1}{a^2} - \frac{c^2}{b^2} \right) x_1^2 = 0,
$$
a contradiction. Therefore, $x_1 + x_2 = 0$, which implies $y_1 + y_2 = 0$ by (2). Hence the midpoint of $AB$ is the center $(0, 0)$ of the hyperbola.</p>
Answer 14: <p>Let $A$ symmetric matrix and let $u_1^TAu_1=u_2^TAu_2=1$. Then
\begin{align}
\underbrace{\left(\frac{u_1-u_2}{2}\right)^TA\left(\frac{u_1-u_2}{2}\right)}_{P_1}+\underbrace{\left(\frac{u_1+u_2}{2}\right)^TA\left(\frac{u_1+u_2}{2}\right)}_{P_2}=1~~,\tag{1}
\end{align}
which implies that if $P_2$ above is zero, then $P_1=1$.</p>
<p>Any hyperbola (rotated or not) centered at the origin can be described as the set of points $\vec{x}$ such that $x^TAx=1$ for $A$ symmetric$^*$, and its asymptotes as the points such that $x^TAx=0$. In Equation (1), $u_1$ and $u_2$ lie on the hyperbola. The midpoint is $(u_1+u_2)/2$, and its lying on an asymptote means $P_2=0$. Therefore $P_1=1$, i.e., $(u_1-u_2)/2$ lies on the hyperbola. But if $u_2$ lies on the hyperbola (centered at the origin), then so does $-u_2$. Hence $u_1$, $-u_2$, and $(u_1-u_2)/2$ all lie on the hyperbola; since $(u_1-u_2)/2$ is the midpoint of the segment from $u_1$ to $-u_2$, these three points are collinear, and a line intersects a hyperbola in at most two points, so necessarily $(u_1-u_2)/2=u_1$ or $(u_1-u_2)/2=-u_2$. In either case $u_1+u_2=0$ or, what is the same, the midpoint of the two points is the origin. </p>
<p>The reasoning above carries over to hyperbolas not centered at the origin by taking $(x-x_0)^TA(x-x_0)=1$ as the equation of the hyperbola and $(x-x_0)^TA(x-x_0)=0$ for the asymptotes. </p>
<p>$^*$ $A$ must have one positive and one negative eigenvalue, but this fact is not needed for the proof. </p>
|
Proof that expression divided by opposite equals $-1$ | 2,088,922 | 5 | https://math.stackexchange.com/questions/2088922/proof-that-expression-divided-by-opposite-equals-1 | <p>I'm reviewing some basic algebra and I came across a statement saying that: $$\frac{a - b} {b - a} = -1$$ Plugging in a few values this appears to be true. </p>
<p>However, I have been unable to find a formal proof of this, nor do I know what it is called. Could someone provide additional information on this matter?</p>
| [
{
"answer_id": 2088929,
"score": 5,
"text": "<p>We have $$P= \\frac {a-b}{b-a} =\\frac {a-b}{-(a-b)} =-1$$ We can also write it as $$P=\\frac {a-b}{b-a} =\\frac {-(b-a)}{b-a} =-1$$ Hope it helps. </p>\n"
},
{
"answer_id": 2088945,
"score": 1,
"text": "<p>It was quite easy you have just to take $(-)$ minus common from numerator or denominator $$\\frac{a-b}{b-a}=\\frac{a-b}{-(a-b)}=\\frac1{-1}=-1$$ $$**OR**$$ $$\\frac {a-b}{b-a} =\\frac {-(b-a)}{b-a} =\\frac{-1}1=-1.$$</p>\n"
},
{
"answer_id": 2088926,
"score": 0,
"text": "<p>since we get by the minus sign $$-(a-b)=b-a$$</p>\n"
},
{
"answer_id": 2088935,
"score": 0,
"text": "<p>$$\\frac{(a-b)}{(b-a)}=\\frac{(a-b)}{-(a-b)}=\\frac{1}{-1}=-1$$</p>\n"
}
] | Question: Proof that expression divided by opposite equals $-1$
Answer 1: <p>We have $$P= \frac {a-b}{b-a} =\frac {a-b}{-(a-b)} =-1$$ We can also write it as $$P=\frac {a-b}{b-a} =\frac {-(b-a)}{b-a} =-1$$ Hope it helps. </p>
Answer 2: <p>It is quite easy: just factor the minus sign out of the numerator or the denominator: $$\frac{a-b}{b-a}=\frac{a-b}{-(a-b)}=\frac1{-1}=-1$$ or $$\frac {a-b}{b-a} =\frac {-(b-a)}{b-a} =\frac{-1}1=-1.$$</p>
Answer 3: <p>This follows since, by factoring out the minus sign, $$-(a-b)=b-a.$$</p>
Answer 4: <p>$$\frac{(a-b)}{(b-a)}=\frac{(a-b)}{-(a-b)}=\frac{1}{-1}=-1$$</p>
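<p>A one-line symbolic confirmation with SymPy (a sketch; the identity holds wherever $a \neq b$):</p>
<pre><code>import sympy as sp

a, b = sp.symbols('a b')
print(sp.simplify((a - b) / (b - a)))   # -> -1
</code></pre>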
|
Construct a random variable with a given distribution | 2,089,091 | 5 | https://math.stackexchange.com/questions/2089091/construct-a-random-variable-with-a-given-distribution | <p>Suppose that $(\Omega, \mathcal{F}, P)$ is a probability space, where $\mathcal{F}$ is the $\sigma$-algebra of Lebesgue measurable subsets of $\Omega\equiv[0,1]$ and $P$ is the Lebesgue measure. Let $G:\mathbb{R}\to[0,1]$ be an arbitrary distribution function. My questions: </p>
<ol>
<li>Can we always construct $X:\Omega\to\mathbb{R}$ so that $X$ has $G$ as its CDF?</li>
<li>If the answer is 'Yes' to the above, what are other triplets besides the particular $(\Omega,\mathcal{F},P)$ given above that will yield the same answer?</li>
</ol>
<p>If $G$ is continuous and strictly increasing with range $[0,1]$ (so that $G^{-1}$ is well defined), $X(\omega)=G^{-1}(\omega)$ for $\omega\in[0,1]$ works:
$$
\Pr[X\leq x]=P[\omega:X(\omega)\leq x]=P[\omega:\omega\leq G(x)]=P([0,G(x)])=G(x).
$$
But I don't know how to deal with more general $G$.</p>
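<p>For general $G$ the standard remedy is the generalized inverse (quantile function) $X(\omega)=\inf\{x : G(x)\ge\omega\}$, which handles jumps and flat pieces of $G$. A minimal numerical sketch, using a made-up discrete $G$ supported on three points:</p>
<pre><code>import numpy as np

# Hypothetical discrete CDF: P(X=-1)=0.2, P(X=0)=0.5, P(X=2)=0.3
xs = np.array([-1.0, 0.0, 2.0])   # support points
G  = np.array([0.2, 0.7, 1.0])    # G evaluated at those points

def X(w):
    # Generalized inverse: smallest x with G(x) >= w
    return xs[np.searchsorted(G, w, side='left')]

w = np.random.rand(100_000)       # omega ~ Lebesgue measure on [0,1]
vals, counts = np.unique([X(t) for t in w], return_counts=True)
print(dict(zip(vals, counts / w.size)))   # approx {-1: 0.2, 0: 0.5, 2: 0.3}
</code></pre>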
| [
{
"answer_id": 2089592,
"score": 2,
"text": "<p>A necessary and sufficient condition is that $(\\Omega,\\mathcal{F},P)$ is an <em>atomless</em> probability space. An atom in a probability space is a set $E\\in\\mathcal{F}$such that $P(E)>0$ and for all $F\\subseteq E$ with $F\\in\\mathcal{F}$ either $P(F)=0$ or $P(F)=P(E)$. The proof of sufficiency is somewhat messy and naturally proceeds by constructing a uniformly distributed random variable with values in $[0,1]$ .</p>\n"
},
{
"answer_id": 2097443,
"score": 1,
"text": "<p>I nice reference for the sufficient conditions for 1. to be possible is Lemma 2.3 in <a href=\"http://projecteuclid.org/euclid.aop/1176995146\" rel=\"nofollow noreferrer\">Berkes and Philipp (1979): Approximation Thorems for Independent and Weakly Dependent Random Vectors</a></p>\n"
}
] | Question: Construct a random variable with a given distribution
Answer 1: <p>A necessary and sufficient condition is that $(\Omega,\mathcal{F},P)$ is an <em>atomless</em> probability space. An atom in a probability space is a set $E\in\mathcal{F}$ such that $P(E)>0$ and for all $F\subseteq E$ with $F\in\mathcal{F}$ either $P(F)=0$ or $P(F)=P(E)$. The proof of sufficiency is somewhat messy and naturally proceeds by constructing a uniformly distributed random variable with values in $[0,1]$.</p>
Answer 2: <p>A nice reference for the sufficient conditions for 1. to be possible is Lemma 2.3 in <a href="http://projecteuclid.org/euclid.aop/1176995146" rel="nofollow noreferrer">Berkes and Philipp (1979): Approximation Theorems for Independent and Weakly Dependent Random Vectors</a></p>
|
How to integrate $\int\sqrt{\frac{4-x}{4+x}}$? | 2,089,334 | 5 | https://math.stackexchange.com/questions/2089334/how-to-integrate-int-sqrt-frac4-x4x | <p>Let </p>
<p>$$g(x)=\sqrt{\dfrac{4-x}{4+x}}.$$</p>
<p>I would like to find the primitive of $g(x)$, say $G(x)$.</p>
<p>I did the following: first the domain of $g(x)$ is $D_g=(-4, 4]$. Second, we have</p>
<p>\begin{align}
G(x)=\int g(x)dx &=\int\sqrt{\dfrac{4-x}{4+x}}dx\\
&=\int\sqrt{\dfrac{(4-x)(4-x)}{(4+x)(4-x)}}dx\\
&=\int\sqrt{\dfrac{(4-x)^2}{16-x^2}}dx\\
&=\int\dfrac{4-x}{\sqrt{16-x^2}}dx\\
&=\int\dfrac{4}{\sqrt{16-x^2}}dx-\int\dfrac{x}{\sqrt{16-x^2}}dx\\
&=\int\dfrac{4}{\sqrt{16(1-x^2/16)}}dx+\int\dfrac{-2x}{2\sqrt{16-x^2}}dx\\
&=\underbrace{\int\dfrac{1}{\sqrt{1-(x/4)^2}}dx}_{\text{set $t=x/4$}}+\sqrt{16-x^2}+C\\
&=\underbrace{\int\dfrac{4}{\sqrt{1-t^2}}dt}_{\text{set $t=\sin \theta$}}+\sqrt{16-x^2}+C\\
&=\int\dfrac{4\cos\theta}{\sqrt{\cos^2\theta}}d\theta+\sqrt{16-x^2}+C\\
\end{align}</p>
<p>So finally, I get </p>
<p>$$G(x)=\pm4\theta+\sqrt{16-x^2}+C'.$$</p>
<p>With WolframAlpha I found a somewhat different answer. Could you provide any suggestions?</p>
<p>Also, is it correct to multiply by $4-x$ at the beginning? Because then I should require that $x\neq 4$.</p>
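<p>A numerical spot-check of the candidate antiderivative $4\arcsin(x/4)+\sqrt{16-x^2}$ (the expression obtained by back-substituting $\theta=\arcsin(x/4)$ and keeping the coefficient $4$), done here with SymPy as a sketch:</p>
<pre><code>import sympy as sp

x = sp.Symbol('x', real=True)
g = sp.sqrt((4 - x) / (4 + x))
G = 4 * sp.asin(x / 4) + sp.sqrt(16 - x**2)   # candidate antiderivative

d = sp.diff(G, x) - g
for v in (-3, -1, 0, 1, 3.9):                 # sample points in (-4, 4)
    print(sp.N(d.subs(x, v)))                 # all approximately 0
</code></pre>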
| [
{
"answer_id": 2089386,
"score": 4,
"text": "<p>First of all, you are right that there is trouble in multiplying by $\\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\\langle-4,4]$ in the first place? You can change the integrand at one point without changing integral, so that one point is irrelevant. Thus, choose $D_g = \\langle-4,4\\rangle$. </p>\n\n<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \\sin\\theta \\in\\langle -1,1\\rangle$, just choose $\\theta$ to be in $\\langle-\\frac\\pi 2,\\frac\\pi 2\\rangle$ (substitution is natural bijection that way). Then you have that $\\cos\\theta>0$, and thus $\\sqrt{\\cos^2\\theta}=\\cos\\theta$.</p>\n\n<p>So, your final result should be $$4\\theta + \\sqrt{16-x^2} + C = 4\\arcsin\\frac x4 + \\sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>\n\n<p>Although your procedure is fine, in this case you might want to make trigonometric substitution much sooner:</p>\n\n<p>$$\\int\\sqrt{\\frac{4-x}{4+x}}\\,dx = \\int\\frac{4-x}{\\sqrt{16-x^2}}\\, dx = [x = 4\\sin t] =\\int \\frac{4-4\\sin t}{4\\cos t}\\cdot 4\\cos t\\, dt=\\\\ = 4(t+\\cos t)+C = 4\\arcsin\\frac x4 +4\\cos(\\arcsin\\frac x4) + C = 4\\arcsin\\frac x4 +4\\sqrt{1-\\frac{x^2}4} + C$$</p>\n"
},
{
"answer_id": 2089377,
"score": 3,
"text": "<p>Solving a more general problem, using integration by parts:</p>\n\n<p>$$\\mathcal{I}_\\text{n}\\left(x\\right)=\\int\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}\\space\\text{d}x=x\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}+\\text{n}\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x$$</p>\n\n<p>Now, substitute $\\text{u}=\\text{n}+x$ and $\\text{d}\\text{u}=\\text{d}x$:</p>\n\n<p>$$\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x=\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>Seperate the integral:</p>\n\n<p>$$\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{\\text{u}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\int\\frac{\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=$$\n$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\text{n}\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>So:</p>\n\n<ol>\n<li>$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{1}{\\sqrt{\\text{u}}\\sqrt{2\\text{n}-\\text{u}}}\\space\\text{d}\\text{u}=2\\arcsin\\left\\{\\frac{\\sqrt{\\text{u}}}{\\sqrt{2}\\sqrt{\\text{n}}}\\right\\}+\\text{C}_1$$</li>\n<li>Substitute $\\text{s}=\\frac{2\\text{n}}{\\text{u}}-1$ and $\\text{d}\\text{s}=-\\frac{2\\text{n}}{\\text{u}^2}\\space\\text{d}\\text{u}$:\n$$\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=-\\frac{1}{2\\text{n}}\\int\\frac{1}{\\sqrt{\\text{s}}}\\space\\text{d}\\text{s}=\\text{C}_2-\\frac{\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}{\\text{n}}$$</li>\n</ol>\n"
},
{
"answer_id": 2089383,
"score": 3,
"text": "<p>Substitute $x=4\\frac{1-y^2}{1+y^2}$ so that $y=\\sqrt{\\frac{4-x}{4+x}}$:\n$$\n\\begin{align}\n\\int\\sqrt{\\frac{4-x}{4+x}}\\,\\mathrm{d}x\n&=4\\int y\\,\\mathrm{d}\\frac{1-y^2}{1+y^2}\\\\\n&=-16\\int\\frac{y^2}{\\left(1+y^2\\right)^2}\\,\\mathrm{d}y\\\\\n&=-16\\int\\frac{\\tan^2(\\theta)}{\\sec^2(\\theta)}\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\sin^2(\\theta)\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\frac{1-\\cos(2\\theta)}2\\,\\mathrm{d}\\theta\\\\\n&=-8\\theta+4\\sin(2\\theta)+C\\\\\n&=-8\\theta+8\\frac{\\tan(\\theta)}{\\sec^2(\\theta)}+C\\\\\n&=-8\\tan^{-1}\\left(\\sqrt{\\frac{4-x}{4+x}}\\right)+\\sqrt{16-x^2}+C\n\\end{align}\n$$</p>\n"
},
{
"answer_id": 2089386,
"score": 4,
"text": "<p>First of all, you are right that there is trouble in multiplying by $\\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\\langle-4,4]$ in the first place? You can change the integrand at one point without changing integral, so that one point is irrelevant. Thus, choose $D_g = \\langle-4,4\\rangle$. </p>\n\n<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \\sin\\theta \\in\\langle -1,1\\rangle$, just choose $\\theta$ to be in $\\langle-\\frac\\pi 2,\\frac\\pi 2\\rangle$ (substitution is natural bijection that way). Then you have that $\\cos\\theta>0$, and thus $\\sqrt{\\cos^2\\theta}=\\cos\\theta$.</p>\n\n<p>So, your final result should be $$4\\theta + \\sqrt{16-x^2} + C = 4\\arcsin\\frac x4 + \\sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>\n\n<p>Although your procedure is fine, in this case you might want to make trigonometric substitution much sooner:</p>\n\n<p>$$\\int\\sqrt{\\frac{4-x}{4+x}}\\,dx = \\int\\frac{4-x}{\\sqrt{16-x^2}}\\, dx = [x = 4\\sin t] =\\int \\frac{4-4\\sin t}{4\\cos t}\\cdot 4\\cos t\\, dt=\\\\ = 4(t+\\cos t)+C = 4\\arcsin\\frac x4 +4\\cos(\\arcsin\\frac x4) + C = 4\\arcsin\\frac x4 +4\\sqrt{1-\\frac{x^2}4} + C$$</p>\n"
},
{
"answer_id": 2089377,
"score": 3,
"text": "<p>Solving a more general problem, using integration by parts:</p>\n\n<p>$$\\mathcal{I}_\\text{n}\\left(x\\right)=\\int\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}\\space\\text{d}x=x\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}+\\text{n}\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x$$</p>\n\n<p>Now, substitute $\\text{u}=\\text{n}+x$ and $\\text{d}\\text{u}=\\text{d}x$:</p>\n\n<p>$$\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x=\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>Seperate the integral:</p>\n\n<p>$$\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{\\text{u}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\int\\frac{\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=$$\n$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\text{n}\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>So:</p>\n\n<ol>\n<li>$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{1}{\\sqrt{\\text{u}}\\sqrt{2\\text{n}-\\text{u}}}\\space\\text{d}\\text{u}=2\\arcsin\\left\\{\\frac{\\sqrt{\\text{u}}}{\\sqrt{2}\\sqrt{\\text{n}}}\\right\\}+\\text{C}_1$$</li>\n<li>Substitute $\\text{s}=\\frac{2\\text{n}}{\\text{u}}-1$ and $\\text{d}\\text{s}=-\\frac{2\\text{n}}{\\text{u}^2}\\space\\text{d}\\text{u}$:\n$$\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=-\\frac{1}{2\\text{n}}\\int\\frac{1}{\\sqrt{\\text{s}}}\\space\\text{d}\\text{s}=\\text{C}_2-\\frac{\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}{\\text{n}}$$</li>\n</ol>\n"
},
{
"answer_id": 2089383,
"score": 3,
"text": "<p>Substitute $x=4\\frac{1-y^2}{1+y^2}$ so that $y=\\sqrt{\\frac{4-x}{4+x}}$:\n$$\n\\begin{align}\n\\int\\sqrt{\\frac{4-x}{4+x}}\\,\\mathrm{d}x\n&=4\\int y\\,\\mathrm{d}\\frac{1-y^2}{1+y^2}\\\\\n&=-16\\int\\frac{y^2}{\\left(1+y^2\\right)^2}\\,\\mathrm{d}y\\\\\n&=-16\\int\\frac{\\tan^2(\\theta)}{\\sec^2(\\theta)}\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\sin^2(\\theta)\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\frac{1-\\cos(2\\theta)}2\\,\\mathrm{d}\\theta\\\\\n&=-8\\theta+4\\sin(2\\theta)+C\\\\\n&=-8\\theta+8\\frac{\\tan(\\theta)}{\\sec^2(\\theta)}+C\\\\\n&=-8\\tan^{-1}\\left(\\sqrt{\\frac{4-x}{4+x}}\\right)+\\sqrt{16-x^2}+C\n\\end{align}\n$$</p>\n"
},
{
"answer_id": 2089386,
"score": 4,
"text": "<p>First of all, you are right that there is trouble in multiplying by $\\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\\langle-4,4]$ in the first place? You can change the integrand at one point without changing integral, so that one point is irrelevant. Thus, choose $D_g = \\langle-4,4\\rangle$. </p>\n\n<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \\sin\\theta \\in\\langle -1,1\\rangle$, just choose $\\theta$ to be in $\\langle-\\frac\\pi 2,\\frac\\pi 2\\rangle$ (substitution is natural bijection that way). Then you have that $\\cos\\theta>0$, and thus $\\sqrt{\\cos^2\\theta}=\\cos\\theta$.</p>\n\n<p>So, your final result should be $$4\\theta + \\sqrt{16-x^2} + C = 4\\arcsin\\frac x4 + \\sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>\n\n<p>Although your procedure is fine, in this case you might want to make trigonometric substitution much sooner:</p>\n\n<p>$$\\int\\sqrt{\\frac{4-x}{4+x}}\\,dx = \\int\\frac{4-x}{\\sqrt{16-x^2}}\\, dx = [x = 4\\sin t] =\\int \\frac{4-4\\sin t}{4\\cos t}\\cdot 4\\cos t\\, dt=\\\\ = 4(t+\\cos t)+C = 4\\arcsin\\frac x4 +4\\cos(\\arcsin\\frac x4) + C = 4\\arcsin\\frac x4 +4\\sqrt{1-\\frac{x^2}4} + C$$</p>\n"
},
{
"answer_id": 2089377,
"score": 3,
"text": "<p>Solving a more general problem, using integration by parts:</p>\n\n<p>$$\\mathcal{I}_\\text{n}\\left(x\\right)=\\int\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}\\space\\text{d}x=x\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}+\\text{n}\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x$$</p>\n\n<p>Now, substitute $\\text{u}=\\text{n}+x$ and $\\text{d}\\text{u}=\\text{d}x$:</p>\n\n<p>$$\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x=\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>Seperate the integral:</p>\n\n<p>$$\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{\\text{u}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\int\\frac{\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=$$\n$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\text{n}\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>So:</p>\n\n<ol>\n<li>$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{1}{\\sqrt{\\text{u}}\\sqrt{2\\text{n}-\\text{u}}}\\space\\text{d}\\text{u}=2\\arcsin\\left\\{\\frac{\\sqrt{\\text{u}}}{\\sqrt{2}\\sqrt{\\text{n}}}\\right\\}+\\text{C}_1$$</li>\n<li>Substitute $\\text{s}=\\frac{2\\text{n}}{\\text{u}}-1$ and $\\text{d}\\text{s}=-\\frac{2\\text{n}}{\\text{u}^2}\\space\\text{d}\\text{u}$:\n$$\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=-\\frac{1}{2\\text{n}}\\int\\frac{1}{\\sqrt{\\text{s}}}\\space\\text{d}\\text{s}=\\text{C}_2-\\frac{\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}{\\text{n}}$$</li>\n</ol>\n"
},
{
"answer_id": 2089383,
"score": 3,
"text": "<p>Substitute $x=4\\frac{1-y^2}{1+y^2}$ so that $y=\\sqrt{\\frac{4-x}{4+x}}$:\n$$\n\\begin{align}\n\\int\\sqrt{\\frac{4-x}{4+x}}\\,\\mathrm{d}x\n&=4\\int y\\,\\mathrm{d}\\frac{1-y^2}{1+y^2}\\\\\n&=-16\\int\\frac{y^2}{\\left(1+y^2\\right)^2}\\,\\mathrm{d}y\\\\\n&=-16\\int\\frac{\\tan^2(\\theta)}{\\sec^2(\\theta)}\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\sin^2(\\theta)\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\frac{1-\\cos(2\\theta)}2\\,\\mathrm{d}\\theta\\\\\n&=-8\\theta+4\\sin(2\\theta)+C\\\\\n&=-8\\theta+8\\frac{\\tan(\\theta)}{\\sec^2(\\theta)}+C\\\\\n&=-8\\tan^{-1}\\left(\\sqrt{\\frac{4-x}{4+x}}\\right)+\\sqrt{16-x^2}+C\n\\end{align}\n$$</p>\n"
},
{
"answer_id": 2089386,
"score": 4,
"text": "<p>First of all, you are right that there is trouble in multiplying by $\\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\\langle-4,4]$ in the first place? You can change the integrand at one point without changing integral, so that one point is irrelevant. Thus, choose $D_g = \\langle-4,4\\rangle$. </p>\n\n<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \\sin\\theta \\in\\langle -1,1\\rangle$, just choose $\\theta$ to be in $\\langle-\\frac\\pi 2,\\frac\\pi 2\\rangle$ (substitution is natural bijection that way). Then you have that $\\cos\\theta>0$, and thus $\\sqrt{\\cos^2\\theta}=\\cos\\theta$.</p>\n\n<p>So, your final result should be $$4\\theta + \\sqrt{16-x^2} + C = 4\\arcsin\\frac x4 + \\sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>\n\n<p>Although your procedure is fine, in this case you might want to make trigonometric substitution much sooner:</p>\n\n<p>$$\\int\\sqrt{\\frac{4-x}{4+x}}\\,dx = \\int\\frac{4-x}{\\sqrt{16-x^2}}\\, dx = [x = 4\\sin t] =\\int \\frac{4-4\\sin t}{4\\cos t}\\cdot 4\\cos t\\, dt=\\\\ = 4(t+\\cos t)+C = 4\\arcsin\\frac x4 +4\\cos(\\arcsin\\frac x4) + C = 4\\arcsin\\frac x4 +4\\sqrt{1-\\frac{x^2}4} + C$$</p>\n"
},
{
"answer_id": 2089377,
"score": 3,
"text": "<p>Solving a more general problem, using integration by parts:</p>\n\n<p>$$\\mathcal{I}_\\text{n}\\left(x\\right)=\\int\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}\\space\\text{d}x=x\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}+\\text{n}\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x$$</p>\n\n<p>Now, substitute $\\text{u}=\\text{n}+x$ and $\\text{d}\\text{u}=\\text{d}x$:</p>\n\n<p>$$\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x=\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>Seperate the integral:</p>\n\n<p>$$\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{\\text{u}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\int\\frac{\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=$$\n$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\text{n}\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>So:</p>\n\n<ol>\n<li>$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{1}{\\sqrt{\\text{u}}\\sqrt{2\\text{n}-\\text{u}}}\\space\\text{d}\\text{u}=2\\arcsin\\left\\{\\frac{\\sqrt{\\text{u}}}{\\sqrt{2}\\sqrt{\\text{n}}}\\right\\}+\\text{C}_1$$</li>\n<li>Substitute $\\text{s}=\\frac{2\\text{n}}{\\text{u}}-1$ and $\\text{d}\\text{s}=-\\frac{2\\text{n}}{\\text{u}^2}\\space\\text{d}\\text{u}$:\n$$\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=-\\frac{1}{2\\text{n}}\\int\\frac{1}{\\sqrt{\\text{s}}}\\space\\text{d}\\text{s}=\\text{C}_2-\\frac{\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}{\\text{n}}$$</li>\n</ol>\n"
},
{
"answer_id": 2089383,
"score": 3,
"text": "<p>Substitute $x=4\\frac{1-y^2}{1+y^2}$ so that $y=\\sqrt{\\frac{4-x}{4+x}}$:\n$$\n\\begin{align}\n\\int\\sqrt{\\frac{4-x}{4+x}}\\,\\mathrm{d}x\n&=4\\int y\\,\\mathrm{d}\\frac{1-y^2}{1+y^2}\\\\\n&=-16\\int\\frac{y^2}{\\left(1+y^2\\right)^2}\\,\\mathrm{d}y\\\\\n&=-16\\int\\frac{\\tan^2(\\theta)}{\\sec^2(\\theta)}\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\sin^2(\\theta)\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\frac{1-\\cos(2\\theta)}2\\,\\mathrm{d}\\theta\\\\\n&=-8\\theta+4\\sin(2\\theta)+C\\\\\n&=-8\\theta+8\\frac{\\tan(\\theta)}{\\sec^2(\\theta)}+C\\\\\n&=-8\\tan^{-1}\\left(\\sqrt{\\frac{4-x}{4+x}}\\right)+\\sqrt{16-x^2}+C\n\\end{align}\n$$</p>\n"
},
{
"answer_id": 2089386,
"score": 4,
"text": "<p>First of all, you are right that there is trouble in multiplying by $\\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\\langle-4,4]$ in the first place? You can change the integrand at one point without changing integral, so that one point is irrelevant. Thus, choose $D_g = \\langle-4,4\\rangle$. </p>\n\n<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \\sin\\theta \\in\\langle -1,1\\rangle$, just choose $\\theta$ to be in $\\langle-\\frac\\pi 2,\\frac\\pi 2\\rangle$ (substitution is natural bijection that way). Then you have that $\\cos\\theta>0$, and thus $\\sqrt{\\cos^2\\theta}=\\cos\\theta$.</p>\n\n<p>So, your final result should be $$4\\theta + \\sqrt{16-x^2} + C = 4\\arcsin\\frac x4 + \\sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>\n\n<p>Although your procedure is fine, in this case you might want to make trigonometric substitution much sooner:</p>\n\n<p>$$\\int\\sqrt{\\frac{4-x}{4+x}}\\,dx = \\int\\frac{4-x}{\\sqrt{16-x^2}}\\, dx = [x = 4\\sin t] =\\int \\frac{4-4\\sin t}{4\\cos t}\\cdot 4\\cos t\\, dt=\\\\ = 4(t+\\cos t)+C = 4\\arcsin\\frac x4 +4\\cos(\\arcsin\\frac x4) + C = 4\\arcsin\\frac x4 +4\\sqrt{1-\\frac{x^2}4} + C$$</p>\n"
},
{
"answer_id": 2089377,
"score": 3,
"text": "<p>Solving a more general problem, using integration by parts:</p>\n\n<p>$$\\mathcal{I}_\\text{n}\\left(x\\right)=\\int\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}\\space\\text{d}x=x\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}+\\text{n}\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x$$</p>\n\n<p>Now, substitute $\\text{u}=\\text{n}+x$ and $\\text{d}\\text{u}=\\text{d}x$:</p>\n\n<p>$$\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x=\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>Seperate the integral:</p>\n\n<p>$$\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{\\text{u}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\int\\frac{\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=$$\n$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\text{n}\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>So:</p>\n\n<ol>\n<li>$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{1}{\\sqrt{\\text{u}}\\sqrt{2\\text{n}-\\text{u}}}\\space\\text{d}\\text{u}=2\\arcsin\\left\\{\\frac{\\sqrt{\\text{u}}}{\\sqrt{2}\\sqrt{\\text{n}}}\\right\\}+\\text{C}_1$$</li>\n<li>Substitute $\\text{s}=\\frac{2\\text{n}}{\\text{u}}-1$ and $\\text{d}\\text{s}=-\\frac{2\\text{n}}{\\text{u}^2}\\space\\text{d}\\text{u}$:\n$$\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=-\\frac{1}{2\\text{n}}\\int\\frac{1}{\\sqrt{\\text{s}}}\\space\\text{d}\\text{s}=\\text{C}_2-\\frac{\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}{\\text{n}}$$</li>\n</ol>\n"
},
{
"answer_id": 2089383,
"score": 3,
"text": "<p>Substitute $x=4\\frac{1-y^2}{1+y^2}$ so that $y=\\sqrt{\\frac{4-x}{4+x}}$:\n$$\n\\begin{align}\n\\int\\sqrt{\\frac{4-x}{4+x}}\\,\\mathrm{d}x\n&=4\\int y\\,\\mathrm{d}\\frac{1-y^2}{1+y^2}\\\\\n&=-16\\int\\frac{y^2}{\\left(1+y^2\\right)^2}\\,\\mathrm{d}y\\\\\n&=-16\\int\\frac{\\tan^2(\\theta)}{\\sec^2(\\theta)}\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\sin^2(\\theta)\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\frac{1-\\cos(2\\theta)}2\\,\\mathrm{d}\\theta\\\\\n&=-8\\theta+4\\sin(2\\theta)+C\\\\\n&=-8\\theta+8\\frac{\\tan(\\theta)}{\\sec^2(\\theta)}+C\\\\\n&=-8\\tan^{-1}\\left(\\sqrt{\\frac{4-x}{4+x}}\\right)+\\sqrt{16-x^2}+C\n\\end{align}\n$$</p>\n"
},
{
"answer_id": 2089386,
"score": 4,
"text": "<p>First of all, you are right that there is trouble in multiplying by $\\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\\langle-4,4]$ in the first place? You can change the integrand at one point without changing integral, so that one point is irrelevant. Thus, choose $D_g = \\langle-4,4\\rangle$. </p>\n\n<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \\sin\\theta \\in\\langle -1,1\\rangle$, just choose $\\theta$ to be in $\\langle-\\frac\\pi 2,\\frac\\pi 2\\rangle$ (substitution is natural bijection that way). Then you have that $\\cos\\theta>0$, and thus $\\sqrt{\\cos^2\\theta}=\\cos\\theta$.</p>\n\n<p>So, your final result should be $$4\\theta + \\sqrt{16-x^2} + C = 4\\arcsin\\frac x4 + \\sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>\n\n<p>Although your procedure is fine, in this case you might want to make trigonometric substitution much sooner:</p>\n\n<p>$$\\int\\sqrt{\\frac{4-x}{4+x}}\\,dx = \\int\\frac{4-x}{\\sqrt{16-x^2}}\\, dx = [x = 4\\sin t] =\\int \\frac{4-4\\sin t}{4\\cos t}\\cdot 4\\cos t\\, dt=\\\\ = 4(t+\\cos t)+C = 4\\arcsin\\frac x4 +4\\cos(\\arcsin\\frac x4) + C = 4\\arcsin\\frac x4 +4\\sqrt{1-\\frac{x^2}4} + C$$</p>\n"
},
{
"answer_id": 2089377,
"score": 3,
"text": "<p>Solving a more general problem, using integration by parts:</p>\n\n<p>$$\\mathcal{I}_\\text{n}\\left(x\\right)=\\int\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}\\space\\text{d}x=x\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}+\\text{n}\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x$$</p>\n\n<p>Now, substitute $\\text{u}=\\text{n}+x$ and $\\text{d}\\text{u}=\\text{d}x$:</p>\n\n<p>$$\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x=\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>Seperate the integral:</p>\n\n<p>$$\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{\\text{u}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\int\\frac{\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=$$\n$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\text{n}\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>So:</p>\n\n<ol>\n<li>$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{1}{\\sqrt{\\text{u}}\\sqrt{2\\text{n}-\\text{u}}}\\space\\text{d}\\text{u}=2\\arcsin\\left\\{\\frac{\\sqrt{\\text{u}}}{\\sqrt{2}\\sqrt{\\text{n}}}\\right\\}+\\text{C}_1$$</li>\n<li>Substitute $\\text{s}=\\frac{2\\text{n}}{\\text{u}}-1$ and $\\text{d}\\text{s}=-\\frac{2\\text{n}}{\\text{u}^2}\\space\\text{d}\\text{u}$:\n$$\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=-\\frac{1}{2\\text{n}}\\int\\frac{1}{\\sqrt{\\text{s}}}\\space\\text{d}\\text{s}=\\text{C}_2-\\frac{\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}{\\text{n}}$$</li>\n</ol>\n"
},
{
"answer_id": 2089383,
"score": 3,
"text": "<p>Substitute $x=4\\frac{1-y^2}{1+y^2}$ so that $y=\\sqrt{\\frac{4-x}{4+x}}$:\n$$\n\\begin{align}\n\\int\\sqrt{\\frac{4-x}{4+x}}\\,\\mathrm{d}x\n&=4\\int y\\,\\mathrm{d}\\frac{1-y^2}{1+y^2}\\\\\n&=-16\\int\\frac{y^2}{\\left(1+y^2\\right)^2}\\,\\mathrm{d}y\\\\\n&=-16\\int\\frac{\\tan^2(\\theta)}{\\sec^2(\\theta)}\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\sin^2(\\theta)\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\frac{1-\\cos(2\\theta)}2\\,\\mathrm{d}\\theta\\\\\n&=-8\\theta+4\\sin(2\\theta)+C\\\\\n&=-8\\theta+8\\frac{\\tan(\\theta)}{\\sec^2(\\theta)}+C\\\\\n&=-8\\tan^{-1}\\left(\\sqrt{\\frac{4-x}{4+x}}\\right)+\\sqrt{16-x^2}+C\n\\end{align}\n$$</p>\n"
},
{
"answer_id": 2089386,
"score": 4,
"text": "<p>First of all, you are right that there is trouble in multiplying by $\\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\\langle-4,4]$ in the first place? You can change the integrand at one point without changing integral, so that one point is irrelevant. Thus, choose $D_g = \\langle-4,4\\rangle$. </p>\n\n<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \\sin\\theta \\in\\langle -1,1\\rangle$, just choose $\\theta$ to be in $\\langle-\\frac\\pi 2,\\frac\\pi 2\\rangle$ (substitution is natural bijection that way). Then you have that $\\cos\\theta>0$, and thus $\\sqrt{\\cos^2\\theta}=\\cos\\theta$.</p>\n\n<p>So, your final result should be $$4\\theta + \\sqrt{16-x^2} + C = 4\\arcsin\\frac x4 + \\sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>\n\n<p>Although your procedure is fine, in this case you might want to make trigonometric substitution much sooner:</p>\n\n<p>$$\\int\\sqrt{\\frac{4-x}{4+x}}\\,dx = \\int\\frac{4-x}{\\sqrt{16-x^2}}\\, dx = [x = 4\\sin t] =\\int \\frac{4-4\\sin t}{4\\cos t}\\cdot 4\\cos t\\, dt=\\\\ = 4(t+\\cos t)+C = 4\\arcsin\\frac x4 +4\\cos(\\arcsin\\frac x4) + C = 4\\arcsin\\frac x4 +4\\sqrt{1-\\frac{x^2}4} + C$$</p>\n"
},
{
"answer_id": 2089377,
"score": 3,
"text": "<p>Solving a more general problem, using integration by parts:</p>\n\n<p>$$\\mathcal{I}_\\text{n}\\left(x\\right)=\\int\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}\\space\\text{d}x=x\\sqrt{\\frac{\\text{n}-x}{\\text{n}+x}}+\\text{n}\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x$$</p>\n\n<p>Now, substitute $\\text{u}=\\text{n}+x$ and $\\text{d}\\text{u}=\\text{d}x$:</p>\n\n<p>$$\\int\\frac{x}{\\left(\\text{n}+x\\right)^2\\sqrt{\\frac{2\\text{n}}{\\text{n}+x}-1}}\\space\\text{d}x=\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>Seperate the integral:</p>\n\n<p>$$\\int\\frac{\\text{u}-\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{\\text{u}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\int\\frac{\\text{n}}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=$$\n$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}-\\text{n}\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}$$</p>\n\n<p>So:</p>\n\n<ol>\n<li>$$\\int\\frac{1}{\\text{u}\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=\\int\\frac{1}{\\sqrt{\\text{u}}\\sqrt{2\\text{n}-\\text{u}}}\\space\\text{d}\\text{u}=2\\arcsin\\left\\{\\frac{\\sqrt{\\text{u}}}{\\sqrt{2}\\sqrt{\\text{n}}}\\right\\}+\\text{C}_1$$</li>\n<li>Substitute $\\text{s}=\\frac{2\\text{n}}{\\text{u}}-1$ and $\\text{d}\\text{s}=-\\frac{2\\text{n}}{\\text{u}^2}\\space\\text{d}\\text{u}$:\n$$\\int\\frac{1}{\\text{u}^2\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}\\space\\text{d}\\text{u}=-\\frac{1}{2\\text{n}}\\int\\frac{1}{\\sqrt{\\text{s}}}\\space\\text{d}\\text{s}=\\text{C}_2-\\frac{\\sqrt{\\frac{2\\text{n}}{\\text{u}}-1}}{\\text{n}}$$</li>\n</ol>\n"
},
{
"answer_id": 2089383,
"score": 3,
"text": "<p>Substitute $x=4\\frac{1-y^2}{1+y^2}$ so that $y=\\sqrt{\\frac{4-x}{4+x}}$:\n$$\n\\begin{align}\n\\int\\sqrt{\\frac{4-x}{4+x}}\\,\\mathrm{d}x\n&=4\\int y\\,\\mathrm{d}\\frac{1-y^2}{1+y^2}\\\\\n&=-16\\int\\frac{y^2}{\\left(1+y^2\\right)^2}\\,\\mathrm{d}y\\\\\n&=-16\\int\\frac{\\tan^2(\\theta)}{\\sec^2(\\theta)}\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\sin^2(\\theta)\\,\\mathrm{d}\\theta\\\\\n&=-16\\int\\frac{1-\\cos(2\\theta)}2\\,\\mathrm{d}\\theta\\\\\n&=-8\\theta+4\\sin(2\\theta)+C\\\\\n&=-8\\theta+8\\frac{\\tan(\\theta)}{\\sec^2(\\theta)}+C\\\\\n&=-8\\tan^{-1}\\left(\\sqrt{\\frac{4-x}{4+x}}\\right)+\\sqrt{16-x^2}+C\n\\end{align}\n$$</p>\n"
},
{
"answer_id": 2089386,
"score": 4,
"text": "<p>First of all, you are right that there is trouble in multiplying by $\\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\\langle-4,4]$ in the first place? You can change the integrand at one point without changing integral, so that one point is irrelevant. Thus, choose $D_g = \\langle-4,4\\rangle$. </p>\n\n<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \\sin\\theta \\in\\langle -1,1\\rangle$, just choose $\\theta$ to be in $\\langle-\\frac\\pi 2,\\frac\\pi 2\\rangle$ (substitution is natural bijection that way). Then you have that $\\cos\\theta>0$, and thus $\\sqrt{\\cos^2\\theta}=\\cos\\theta$.</p>\n\n<p>So, your final result should be $$4\\theta + \\sqrt{16-x^2} + C = 4\\arcsin\\frac x4 + \\sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>\n\n<p>Although your procedure is fine, in this case you might want to make trigonometric substitution much sooner:</p>\n\n<p>$$\\int\\sqrt{\\frac{4-x}{4+x}}\\,dx = \\int\\frac{4-x}{\\sqrt{16-x^2}}\\, dx = [x = 4\\sin t] =\\int \\frac{4-4\\sin t}{4\\cos t}\\cdot 4\\cos t\\, dt=\\\\ = 4(t+\\cos t)+C = 4\\arcsin\\frac x4 +4\\cos(\\arcsin\\frac x4) + C = 4\\arcsin\\frac x4 +4\\sqrt{1-\\frac{x^2}4} + C$$</p>\n"
}
] | Question: How to integrate $\int\sqrt{\frac{4-x}{4+x}}$?
Answer 1: <p>First of all, you are right that there is trouble in multiplying by $\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\langle-4,4]$ in the first place? You can change the integrand at one point without changing the integral, so that one point is irrelevant. Thus, choose $D_g = \langle-4,4\rangle$. </p>
<p>The only other issue is the one that I already mentioned in the comments. If you substitute $t = \sin\theta \in\langle -1,1\rangle$, just choose $\theta$ to be in $\langle-\frac\pi 2,\frac\pi 2\rangle$ (the substitution is a natural bijection that way). Then you have that $\cos\theta>0$, and thus $\sqrt{\cos^2\theta}=\cos\theta$.</p>
<p>So, your final result should be $$4\theta + \sqrt{16-x^2} + C = 4\arcsin\frac x4 + \sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine.</p>
<p>Although your procedure is fine, in this case you might want to make the trigonometric substitution much sooner:</p>
<p>$$\int\sqrt{\frac{4-x}{4+x}}\,dx = \int\frac{4-x}{\sqrt{16-x^2}}\, dx = [x = 4\sin t] =\int \frac{4-4\sin t}{4\cos t}\cdot 4\cos t\, dt=\\ = 4(t+\cos t)+C = 4\arcsin\frac x4 +4\cos(\arcsin\frac x4) + C = 4\arcsin\frac x4 +4\sqrt{1-\frac{x^2}{16}} + C$$</p>
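<p>As a numeric sanity check of this antiderivative (a minimal sketch, not part of the original answer; it assumes only Python's standard library), one can compare a central-difference derivative of $F(x)=4\arcsin\frac x4+\sqrt{16-x^2}$ with the integrand:</p>
<pre><code>import math

def f(x):  # the integrand
    return math.sqrt((4 - x) / (4 + x))

def F(x):  # the antiderivative derived above
    return 4 * math.asin(x / 4) + math.sqrt(16 - x * x)

h = 1e-6
for x in (-3.0, -1.0, 0.0, 1.5, 3.5):
    dF = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    print(f"x={x:5.2f}  F'(x) ~ {dF:.6f}  f(x) = {f(x):.6f}")
</code></pre>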
Answer 2: <p>Solving a more general problem, using integration by parts:</p>
<p>$$\mathcal{I}_\text{n}\left(x\right)=\int\sqrt{\frac{\text{n}-x}{\text{n}+x}}\space\text{d}x=x\sqrt{\frac{\text{n}-x}{\text{n}+x}}+\text{n}\int\frac{x}{\left(\text{n}+x\right)^2\sqrt{\frac{2\text{n}}{\text{n}+x}-1}}\space\text{d}x$$</p>
<p>Now, substitute $\text{u}=\text{n}+x$ and $\text{d}\text{u}=\text{d}x$:</p>
<p>$$\int\frac{x}{\left(\text{n}+x\right)^2\sqrt{\frac{2\text{n}}{\text{n}+x}-1}}\space\text{d}x=\int\frac{\text{u}-\text{n}}{\text{u}^2\sqrt{\frac{2\text{n}}{\text{u}}-1}}\space\text{d}\text{u}$$</p>
<p>Separate the integral:</p>
<p>$$\int\frac{\text{u}-\text{n}}{\text{u}^2\sqrt{\frac{2\text{n}}{\text{u}}-1}}\space\text{d}\text{u}=\int\frac{\text{u}}{\text{u}^2\sqrt{\frac{2\text{n}}{\text{u}}-1}}\space\text{d}\text{u}-\int\frac{\text{n}}{\text{u}^2\sqrt{\frac{2\text{n}}{\text{u}}-1}}\space\text{d}\text{u}=$$
$$\int\frac{1}{\text{u}\sqrt{\frac{2\text{n}}{\text{u}}-1}}\space\text{d}\text{u}-\text{n}\int\frac{1}{\text{u}^2\sqrt{\frac{2\text{n}}{\text{u}}-1}}\space\text{d}\text{u}$$</p>
<p>So:</p>
<ol>
<li>$$\int\frac{1}{\text{u}\sqrt{\frac{2\text{n}}{\text{u}}-1}}\space\text{d}\text{u}=\int\frac{1}{\sqrt{\text{u}}\sqrt{2\text{n}-\text{u}}}\space\text{d}\text{u}=2\arcsin\left\{\frac{\sqrt{\text{u}}}{\sqrt{2}\sqrt{\text{n}}}\right\}+\text{C}_1$$</li>
<li>Substitute $\text{s}=\frac{2\text{n}}{\text{u}}-1$ and $\text{d}\text{s}=-\frac{2\text{n}}{\text{u}^2}\space\text{d}\text{u}$:
$$\int\frac{1}{\text{u}^2\sqrt{\frac{2\text{n}}{\text{u}}-1}}\space\text{d}\text{u}=-\frac{1}{2\text{n}}\int\frac{1}{\sqrt{\text{s}}}\space\text{d}\text{s}=\text{C}_2-\frac{\sqrt{\frac{2\text{n}}{\text{u}}-1}}{\text{n}}$$</li>
</ol>
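<p>Combining the two pieces above (a step the answer leaves implicit; with $\text{u}=\text{n}+x$ and all constants absorbed into $\text{C}$) gives, for $|x|<\text{n}$:</p>
<p>$$\mathcal{I}_\text{n}\left(x\right)=x\sqrt{\frac{\text{n}-x}{\text{n}+x}}+2\text{n}\arcsin\left\{\frac{\sqrt{\text{n}+x}}{\sqrt{2}\sqrt{\text{n}}}\right\}+\text{n}\sqrt{\frac{\text{n}-x}{\text{n}+x}}+\text{C}=\sqrt{\text{n}^2-x^2}+2\text{n}\arcsin\left\{\frac{\sqrt{\text{n}+x}}{\sqrt{2}\sqrt{\text{n}}}\right\}+\text{C}$$</p>
<p>At $\text{n}=4$ this agrees with the result of Answer 1 up to an additive constant.</p>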
Answer 3: <p>Substitute $x=4\frac{1-y^2}{1+y^2}$ so that $y=\sqrt{\frac{4-x}{4+x}}$, and then let $y=\tan(\theta)$:
$$
\begin{align}
\int\sqrt{\frac{4-x}{4+x}}\,\mathrm{d}x
&=4\int y\,\mathrm{d}\frac{1-y^2}{1+y^2}\\
&=-16\int\frac{y^2}{\left(1+y^2\right)^2}\,\mathrm{d}y\\
&=-16\int\frac{\tan^2(\theta)}{\sec^2(\theta)}\,\mathrm{d}\theta\\
&=-16\int\sin^2(\theta)\,\mathrm{d}\theta\\
&=-16\int\frac{1-\cos(2\theta)}2\,\mathrm{d}\theta\\
&=-8\theta+4\sin(2\theta)+C\\
&=-8\theta+8\frac{\tan(\theta)}{\sec^2(\theta)}+C\\
&=-8\tan^{-1}\left(\sqrt{\frac{4-x}{4+x}}\right)+\sqrt{16-x^2}+C
\end{align}
$$</p>
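<p>Although this looks different from the $\arcsin$ forms above, the answers agree: for $|x|\le4$ one has the standard identity</p>
<p>$$\arcsin\frac x4=\frac\pi2-2\tan^{-1}\left(\sqrt{\frac{4-x}{4+x}}\right),$$</p>
<p>so $-8\tan^{-1}\left(\sqrt{\frac{4-x}{4+x}}\right)+\sqrt{16-x^2}=4\arcsin\frac x4+\sqrt{16-x^2}-2\pi$, the same antiderivative up to a constant.</p>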
|
What is the difference between $O(2^{n})$ and $2^{O(n)}$? | 2,089,438 | 5 | https://math.stackexchange.com/questions/2089438/what-is-the-difference-between-o2n-and-2on | <p>In the context of complexity, I have seen both $O(2^{n})$ and $2^{O(n)}$ used.</p>
<p>What's the difference between the two?</p>
| [
{
"answer_id": 2089443,
"score": 15,
"text": "<p>$2^{O(n)}$ is larger. For $O(2^n)$ is exactly functions bounded by $K\\cdot 2^n$ for some large $K$, while $2^{O(n)}$ includes things like $4^n = 2^{2n}$, which is not $O(2^n)$.</p>\n"
},
{
"answer_id": 2089447,
"score": 1,
"text": "<p>$f(n)=O(2^n)$ means there exists constant $M>0$ such that $\\mid f(n) \\mid \\leq M.2^n$ whereas $f(n)=2^{O(n)}$ means there exists constant $M>0$ such that $\\mid f(n)\\mid \\leq 2^{M.n}$ for large $n$.</p>\n"
}
] | Question: What is the difference between $O(2^{n})$ and $2^{O(n)}$?
Answer 1: <p>$2^{O(n)}$ is larger. $O(2^n)$ consists exactly of the functions bounded by $K\cdot 2^n$ for some constant $K$, while $2^{O(n)}$ includes things like $4^n = 2^{2n}$, which is not $O(2^n)$.</p>
Answer 2: <p>$f(n)=O(2^n)$ means there exists a constant $M>0$ such that $\mid f(n) \mid \leq M\cdot 2^n$, whereas $f(n)=2^{O(n)}$ means there exists a constant $M>0$ such that $\mid f(n)\mid \leq 2^{M\cdot n}$ for large $n$.</p>
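<p>A concrete way to see the gap (a sketch assuming only a Python interpreter): $4^n=2^{2n}$ lies in $2^{O(n)}$ with $M=2$, but no constant $K$ gives $4^n\leq K\cdot 2^n$, because the ratio $4^n/2^n=2^n$ is unbounded:</p>
<pre><code># 4**n // 2**n == 2**n grows without bound, so 4**n is not O(2**n),
# while 4**n == 2**(2*n) fits inside 2**(M*n) with M = 2.
for n in (1, 5, 10, 20, 40):
    print(f"n={n:2d}  4^n / 2^n = {4**n // 2**n}")
</code></pre>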
|
Visual Explanation of the Dominated Convergence Theorem | 2,089,845 | 5 | https://math.stackexchange.com/questions/2089845/visual-explanation-of-the-dominated-convergence-theorem | <p>From a pedagogical perspective, is there a way to explain the <strong>Dominated Convergence Theorem</strong> using visual examples? This may sound tasteless to mathematicians, but many times engineering students do not have exposure to the ways of thinking that mathematicians are comfortable with. </p>
<p>I was hoping that, with example plots of sequences of functions $f_n(x)$, there would be an easy way of explaining this beautiful result. </p>
<p>Your help and creative ways of explaining this theorem are appreciated. </p>
| [
{
"answer_id": 2089901,
"score": 1,
"text": "<p>I guess the best approach is to think of what can go wrong for non-dominated functions. A key counterexample to interchanging limits and integrals is if you can construct a sequence of functions that converges pointwise to 0 but all have constant area. So basically a sequence of functions that converges to an infinitely tall spike at zero with area 1. We can see immediately that a bounded sequence cannot have this problem. </p>\n\n<p>However, nor can a dominated sequence. There can only be a finite amount of area under the dominating function, so the 'spike' cannot compensate to stay of constant area by getting taller as it thins out. Its area must eventually go to zero.</p>\n\n<p><strong>Addendum</strong>\nFor visualization's sake, a good counterexample is $f_n(x) = nxe^{-\\frac{1}{2}nx^2}$ for $x \\ge 0.$</p>\n"
},
{
"answer_id": 3934053,
"score": 0,
"text": "<p>Here's a simple example that you can easily visualize.\n<span class=\"math-container\">$$\nf_{n}(x) =\n\\begin{cases}\n1 & \\text{if $x \\in [n,n+1)$,}\n\\\\\n0 & \\text{otherwise.}\n\\end{cases}\n$$</span>\nThe domain of <span class=\"math-container\">$f_{n}$</span> is <span class=\"math-container\">$[0,\\infty)$</span>.</p>\n<p>There's no dominating function that is integrable, but the sequence converges to <span class=\"math-container\">$0$</span> pointwise. As <span class=\"math-container\">$n$</span> increases, the strictly positive part of <span class=\"math-container\">$f_{n}$</span> escapes to <span class=\"math-container\">$\\infty$</span>, and the dominating convergence theorem fails.</p>\n<p>Now take any integrable function <span class=\"math-container\">$g > 0$</span> on <span class=\"math-container\">$[0,\\infty)$</span>. If we change the above example to\n<span class=\"math-container\">$$\nf_{n}(x) =\n\\begin{cases}\ng(x) & \\text{if $x \\in [n,n+1)$,}\n\\\\\n0 & \\text{otherwise,}\n\\end{cases}\n$$</span>\nthen the strictly positive part of <span class=\"math-container\">$f_{n}$</span> asymptotically disappears (fast enough) before it escapes to <span class=\"math-container\">$\\infty$</span>. So the dominating convergence theorem holds.</p>\n<p>You can understand the role of a dominating function as a constraint that prevents the above escaping behavior.</p>\n"
}
] | Question: Visual Explanation of the Dominated Convergence Theorem
Answer 1: <p>I guess the best approach is to think of what can go wrong for non-dominated functions. A key counterexample to interchanging limits and integrals arises if you can construct a sequence of functions that converges pointwise to 0 while every term has the same area. So basically, a sequence of functions that converges to an infinitely tall spike at zero with area 1. We can see immediately that a uniformly bounded sequence cannot have this problem.</p>
<p>However, neither can a dominated sequence. There is only a finite amount of area under the dominating function, so the 'spike' cannot keep its area constant by getting taller as it thins out. Its area must eventually go to zero.</p>
<p><strong>Addendum</strong>
For visualization's sake, a good counterexample is $f_n(x) = nxe^{-\frac{1}{2}nx^2}$ for $x \ge 0.$</p>
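<p><em>Editor's note:</em> the addendum's counterexample is easy to sanity-check symbolically. The following sketch is an editorial addition, assuming the <code>sympy</code> library is available; it confirms the constant unit area and the pointwise limit $0$.</p>
<pre><code>import sympy as sp

# f_n(x) = n*x*exp(-n*x**2/2), the thinning spike from the addendum
x, n = sp.symbols('x n', positive=True)
f = n * x * sp.exp(-n * x**2 / 2)

print(sp.integrate(f, (x, 0, sp.oo)))  # 1: the area is constant in n
print(sp.limit(f, n, sp.oo))           # 0: pointwise limit at each fixed x > 0
</code></pre>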
Answer 2: <p>Here's a simple example that you can easily visualize.
<span class="math-container">$$
f_{n}(x) =
\begin{cases}
1 & \text{if $x \in [n,n+1)$,}
\\
0 & \text{otherwise.}
\end{cases}
$$</span>
The domain of <span class="math-container">$f_{n}$</span> is <span class="math-container">$[0,\infty)$</span>.</p>
<p>There's no dominating function that is integrable, but the sequence converges to <span class="math-container">$0$</span> pointwise. As <span class="math-container">$n$</span> increases, the strictly positive part of <span class="math-container">$f_{n}$</span> escapes to <span class="math-container">$\infty$</span>, and the conclusion of the dominated convergence theorem fails.</p>
<p>Now take any integrable function <span class="math-container">$g > 0$</span> on <span class="math-container">$[0,\infty)$</span>. If we change the above example to
<span class="math-container">$$
f_{n}(x) =
\begin{cases}
g(x) & \text{if $x \in [n,n+1)$,}
\\
0 & \text{otherwise,}
\end{cases}
$$</span>
then the strictly positive part of <span class="math-container">$f_{n}$</span> asymptotically disappears (fast enough) before it escapes to <span class="math-container">$\infty$</span>. So the dominated convergence theorem applies.</p>
<p>You can understand the role of a dominating function as a constraint that prevents the above escaping behavior.</p>
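<p><em>Editor's note:</em> the "(fast enough)" above can be made precise in one line. Since $g$ is integrable, its tails vanish, so
$$
\int_0^\infty f_n = \int_n^{n+1} g(x)\,dx \le \int_n^\infty g(x)\,dx \xrightarrow{\ n\to\infty\ } 0 ,
$$
which is exactly the conclusion of the dominated convergence theorem, here with dominating function $g$ itself.</p>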
|
Power series expansion of $\frac{z}{(z^3+1)^2}$ around $z=1$ | 2,090,361 | 5 | https://math.stackexchange.com/questions/2090361/power-series-expansion-of-fraczz312-around-z-1 | <p>I want to expand $f(z)=\frac{z}{(z^3+1)^2}$ around $z=1$. That is, I want to find the coefficients $c_n$ such that $f(z) = \sum_{n=0}^\infty c_n (z-1)^n$.</p>
<p>So far, my <strong>first strategy</strong> was using long division after I expanded both the numerator and the denominator about $z=1$. To this end, we first apply the substitution $u := z-1$, after which we'll expand around $u=0$.</p>
<p>For the numerator we simply get $$1 + u$$.</p>
<p>For the reciprocal of the denominator we have </p>
<p>$$
\frac{1}{\big((u+1)^3+1\big)^2} = \frac{1}{2^2} \frac{1}{\big(1 + \frac{u^3+3u(u+1)}{2}\big)^2} \quad \mbox{put $x:= \frac{u^3+3u(u+1)}{2}$}\\
=-\frac{1}{2^2}\frac{\partial}{\partial x}\frac{1}{x+1} \\
= -\frac{1}{2^2}(-1+2x-3x^2 +\ \ldots\ ) \\
=\frac{1}{2^2}\big(1-3u+\tfrac{15}{4}u^2-u^3+\ \ldots \ \big)
$$</p>
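<p>(A quick symbolic check of this expansion, an editorial sketch assuming the <code>sympy</code> library, reproduces the coefficients above.)</p>
<pre><code>import sympy as sp

# Taylor expansion of 1/((u+1)**3 + 1)**2 around u = 0
u = sp.symbols('u')
expr = 1 / ((u + 1)**3 + 1)**2
print(sp.series(expr, u, 0, 4))
# expected: 1/4 - 3*u/4 + 15*u**2/16 - u**3/4 + O(u**4)
</code></pre>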
<p>After multiplying by the numerator $1+u$ (equivalently, after the long division), we could try to see a pattern in the coefficients of $u^n$, but this doesn't seem to yield any quick results.</p>
<hr>
<p>My <strong>second attempt</strong> was using the binomial theorem to expand the power series of $$f(z) = \sum_{n=0}^\infty a_n z^{3n+1} \quad \mbox{with} \quad a_n = (n+1)(-1)^n$$ about $z=1$.</p>
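<p>(For completeness, the coefficients $a_n$ follow from differentiating the geometric series:
$$
\frac{1}{(1+w)^2} = -\frac{d}{dw}\,\frac{1}{1+w} = -\frac{d}{dw}\sum_{n=0}^\infty (-1)^n w^n = \sum_{n=0}^\infty (n+1)(-1)^n w^n , \qquad |w|<1 ,
$$
and substituting $w = z^3$ gives the series above, valid for $|z|<1$.)</p>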
<p>To this end, we need to reorganize the powers of $u$ in the following sum (written here without the coefficients $a_n$, for simplicity)</p>
<p>$$
\sum_{n=0}^\infty (u+1)^n = \sum_{n=0}^\infty \sum_{k=0}^n \binom nk u^k .
$$</p>
<p>This seems fairly laborious. Below we have the terms for increasing $n$.</p>
<p>$$
1 \\
1 + u \\
1 + 2u + u^2 \\
1 + 3u + 3u^2 + u^3 \\
\vdots \\
1 + mu + \binom m2 u^2 + \ldots + mu^{m-1} + u^m \\
1 + (m+1)u + \binom{m+1}{2}u^2 + \ldots + (m+1)u^m + u^{m+1} \\
1 + (m+2)u + \binom{m+2}{2}u^2 + \ldots + (m+2)u^{m+1} + u^{m+2} \\
\vdots
$$</p>
<p>If we look at the terms $1$, $u$ and $u^2$, collecting their coefficients in the sum over $n$ suggests the following values $b_0$, $b_1$ and $b_2$ respectively.</p>
<p>$$
b_0 = \sum_{n=0}^\infty 1 \\
b_1 =\sum_{n=0}^\infty n \\
b_2 = \sum_{n=0}^\infty \binom n2 \ .
$$ </p>
<p>Expanding on this, we arrive at</p>
<p>$$
\sum_{i=0}^\infty b_i u^i = \sum_{i=0}^\infty \bigg( \sum_{k=0}^\infty \binom ki \bigg)u^i
$$
, which cannot be correct as it stands, since each inner sum $\sum_{k=0}^\infty \binom ki$ diverges.</p>
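<p><em>Editor's note:</em> whichever strategy one adopts, the target coefficients are easy to generate symbolically. The following editorial sketch (assuming the <code>sympy</code> library) prints the first few $c_n$.</p>
<pre><code>import sympy as sp

# series of f(z) = z/(z**3 + 1)**2 around z = 1
z = sp.symbols('z')
f = z / (z**3 + 1)**2
print(f.series(z, 1, 7))
# expected coefficients c_0..c_6 (cf. the answers below):
# 1/4, -1/2, 3/16, 11/16, -79/64, 9/16, 255/256
</code></pre>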
| [
{
"answer_id": 2097807,
"score": 4,
"text": "<blockquote>\n <p>The following expansion of $f(z)=\\frac{z}{(1+z^3)^2}$ at $z=1$ is valid</p>\n \n <p>\\begin{align*}\nf(z)&=\\frac{z}{(1+z^3)^2}\\\\\n&=\\frac{1}{9}\\sum_{j=0}^\\infty\\left(3j+3-(-1)^j\\frac{3j+3}{2^{3j+2}}\\right)(z-1)^{3j}\\\\\n&\\qquad-\\frac{1}{9}\\sum_{j=0}^\\infty\\left(6j+5-(-1)^j\\frac{3j+4}{2^{3j+3}}\\right)(z-1)^{3j+1}\\\\\n&\\qquad+\\frac{1}{9}\\sum_{j=0}^\\infty\\left(3j+2-(-1)^j\\frac{3j+5}{2^{3j+4}}\\right)(z-1)^{3j+2}\\qquad\\qquad |z-1|<1\\tag{1}\\\\\n&=\\frac{1}{4}-\\frac{1}{2}(z-1)+\\frac{3}{16}(z-1)^2+\\frac{11}{16}(z-1)^3\\\\\n&\\qquad-\\frac{79}{64}(z-1)^4+\\frac{9}{16}(z-1)^5+\\frac{255}{256}(z-1)^6+\\cdots\n\\end{align*}</p>\n</blockquote>\n\n<p>We perform a <em><a href=\"https://en.wikipedia.org/wiki/Partial_fraction_decomposition\" rel=\"nofollow noreferrer\">partial fraction decomposition</a></em> of $f(z)$. The zeros of the denominator $(1+z^3)^2$ are\n\\begin{align*}\n\\zeta_0&=-1\\\\\n\\zeta_1&=e^\\frac{\\pi i}{3}=\\frac{1}{2}\\left(1+i\\sqrt{3}\\right)\\\\\n\\zeta_2&=e^{-\\frac{\\pi i}{3}}=\\frac{1}{2}\\left(1-i\\sqrt{3}\\right)=\\overline{\\zeta_1}\n\\end{align*}\nSince each of the zeros has muliplicity two we use the Ansatz:\n\\begin{align*}\n\\frac{z}{(1+z^3)^2}=\\sum_{j=0}^2\\frac{\\alpha_j z+\\beta_j}{(z-\\zeta_j)^2}\\tag{2}\n\\end{align*}</p>\n\n<blockquote>\n <p>Multiplication of the RHS of (2) and comparison of coefficients gives</p>\n \n <p>\\begin{align*}\n\\frac{z}{(z^3+1)^2}=\\frac{1}{9}\\left(-\\frac{z+2}{(z+1)^2}+\n\\frac{\\overline{\\zeta_1}z-2}{\\left(z-\\zeta_1\\right)^2}\n+\\frac{\\zeta_1z-2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\n\\right)\\tag{3}\n\\end{align*}</p>\n \n <p>Note, the second summand of the RHS of (3) is the complex conjugate of the third summand. This will help to simplify some calculations.</p>\n</blockquote>\n\n<p>Next we want to expand the terms of the RHS of (3) at $z=1$. In order to do so it's convenient to do some preparatory work. 
We consider the general aspect of expanding a term around $z=z_0$.</p>\n\n<p>We obtain\n\\begin{align*}\n\\frac{1}{(z-z_0)^n}&=\\frac{1}{\\left((z_1-z_0)+(z-z_1)\\right)^n}\\\\\n&=\\frac{1}{(z_1-z_0)^n}\\cdot\\frac{1}{\\left(1+\\frac{z-z_1}{z_1-z_0}\\right)^n}\\\\\n&=\\frac{1}{(z_1-z_0)^n}\\sum_{j=0}^\\infty \\binom{-n}{j}\\left(\\frac{z-z_0}{z_1-z_0}\\right)^j\\\\\n&=\\frac{1}{(z_1-z_0)^n}\\sum_{j=0}^\\infty \\binom{n+j-1}{j}\\left(\\frac{z-z_0}{z_0-z_1}\\right)^j\\tag{4}\\\\\n\\end{align*}</p>\n\n<p>Since we have to do a lot of calculations with the zeros\n$$\\left\\{-1,\\frac{1}{2}\\left(1+i\\sqrt{3}\\right),\\frac{1}{2}\\left(1-i\\sqrt{3}\\right)\\right\\}=\\left\\{-1,\\zeta_1,\\overline{\\zeta_1}\\right\\}$$ \nits convenient to prepare some intermediate results\n\\begin{array}{r|rrr|lll}\n\\cdot&-1&\\zeta_1&\\overline{\\zeta_1}\\\\\n\\hline\n-1&1&-\\zeta_1&-\\overline{\\zeta_1}&\\qquad \\frac{1}{\\zeta_0-1}=-\\frac{1}{2}\\\\\n\\zeta_1&-\\zeta_1&-\\overline{\\zeta_1}&1\n&\\qquad \\frac{1}{\\zeta_1-1}=-\\zeta_1&\\frac{1}{\\left(\\zeta_1-1\\right)^2}=-\\overline{\\zeta_1}\n&\\frac{1}{\\left(\\zeta_1-1\\right)^3}=1\\\\\n\\overline{\\zeta_1}&-\\overline{\\zeta_1}&1&-\\zeta_1\n&\\qquad \\frac{1}{\\overline{\\zeta_1}-1}=-\\overline{\\zeta_1}&\\frac{1}{\\left(\\overline{\\zeta_1}-1\\right)^2}=-\\zeta_1\n&\\frac{1}{\\left(\\overline{\\zeta_1}-1\\right)^3}=1\\tag{5}\\\\\n\\end{array}</p>\n\n<p>We are now ready and well prepared to continue.</p>\n\n<blockquote>\n <p>Evaluation of (4) with $n=2$ and at $z_1=1$ gives\n \\begin{align*}\n\\frac{1}{\\left(z-z_0\\right)^2}=\\frac{1}{\\left(z_0-1\\right)^2}\\sum_{j=0}^\\infty\\frac{j+1}{(z_0-1)^j}(z-1)^j\n\\end{align*}\n Since we use in (2) a numerator $\\alpha z+\\beta$ we respect it also in the expansion\n \\begin{align*}\n\\frac{\\alpha z+\\beta}{\\left(z-z_0\\right)^2}\n&=\\frac{\\alpha(z-1)+\\alpha+\\beta}{(z_0-1)^2}\\sum_{j=0}^\\infty\\frac{j+1}{(z_0-1)^j}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{(\\alpha z_0+\\beta)j+\\alpha+\\beta}{(z_0-1)^{j+2}}(z-1)^j\\tag{6}\n\\end{align*}</p>\n</blockquote>\n\n<p>Now it's time to harvest. We take the three terms of (2) with $-1,\\zeta_1,\\overline{\\zeta_1}$ and evaluate them using (6). 
We also note that according to (5) $\\frac{1}{\\left(\\zeta_1-1\\right)^3}=1$ so that we can split the sum according to the index $j$ modulo $3$.</p>\n\n<blockquote>\n <p>We obtain using the relations in (5)\n \\begin{align*}\n\\frac{\\overline{\\zeta_1}z-2}{\\left(z-\\zeta_1\\right)^2}\n&=\\sum_{j=0}^\\infty\\frac{\\left(\\overline{\\zeta_1}\\zeta_1-1\\right)j+\\overline{\\zeta_1}-2}{\\left(\\zeta_1-1\\right)^{j+2}}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{\\left(-\\overline{\\zeta_1}\\right)\\left(-j+\\overline{\\zeta_1}-2\\right)}{\\left(\\zeta_1-1\\right)^j}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{\\overline{\\zeta_1}(j+2)+\\zeta_1}{\\left(\\zeta_1-1\\right)^j}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\left[(3j+2)\\overline{\\zeta_1}+\\zeta_1\\right](z-1)^{3j}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[(3j+3)\\overline{\\zeta_1}+\\zeta_1\\right]\\left(-\\zeta_1\\right)(z-1)^{3j+1}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[(3j+4)\\overline{\\zeta_1}+\\zeta_1\\right]\\left(-\\overline{\\zeta_1}\\right)(z-1)^{3j+2}\\\\\n&=\\sum_{j=0}^\\infty\\left[(3j+2)\\overline{\\zeta_1}+\\zeta_1\\right](z-1)^{3j}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[(3j+3)(-1)+\\overline{\\zeta_1}\\right](z-1)^{3j+1}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[(3j+4)\\overline{\\zeta_1}-1\\right](z-1)^{3j+2}\\\\\n\\end{align*}\n We can now earn that first and second term are complex conjugates and obtain\n \\begin{align*}\n\\frac{\\overline{\\zeta_1}z-2}{\\left(z-\\zeta_1\\right)^2}+\\frac{\\zeta_1z-2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\n&=\\sum_{j=0}^\\infty\\left[3j+3\\right](z-1)^{3j}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[-6j-5\\right](z-1)^{3j+1}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[3j+2\\right](z-1)^{3j+2}\\tag{7}\\\\\n\\end{align*}</p>\n</blockquote>\n\n<p>Last but not least the final term of (2)</p>\n\n<blockquote>\n <p>\\begin{align*}\n\\frac{z+2}{(z+1)^2}&=\\sum_{j=0}^\\infty\\frac{(-1+2)j+3}{(-2)^{j+2}}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{j+3}{(-2)^{j+2}}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty (-1)^j\\frac{3j+3}{2^{3j}}(z-1)^{3j}\\\\\n&\\qquad\\sum_{j=0}^\\infty (-1)^{j+1}\\frac{3j+4}{2^{3j+1}}(z-1)^{3j+1}\\\\\n&\\qquad\\sum_{j=0}^\\infty (-1)^{j}\\frac{3j+5}{2^{3j+1}}(z-1)^{3j+2}\\tag{8}\\\\\n\\end{align*}</p>\n \n <p>Combining (7) and (8) and the claim (1) follows.</p>\n</blockquote>\n\n<p><strong>Note:</strong> The nearest singularities of $z=1$ are $\\zeta_1$ and $\\overline{\\zeta_1}$, both having the same distance to $1$. 
This shows the radius of convergence is\n\\begin{align*}\n\\left|1-\\zeta_1\\right|&=\\left|1-\\overline{\\zeta_1}\\right|=\\left|1-\\frac{1}{2}\\left(1-i\\sqrt{3}\\right)\\right|\\\\\n&=\\frac{1}{2}\\left|1+i\\sqrt{3}\\right|\\\\\n&=1\n\\end{align*}</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Add-on 2017-01-15:</strong></p>\n \n <p>OPs was asking how to obtain the coefficients $\\alpha_j,\\beta_j$, $j=0,1,2$ in the partial fraction decomposition (2)</p>\n \n <p>\\begin{align*}\n\\frac{z}{(1+z^3)^2}=\\sum_{j=0}^2\\frac{\\alpha_j z+\\beta_j}{(z-\\zeta_j)^2}\n\\end{align*}</p>\n</blockquote>\n\n<p>One method is to multiply both sides with the denominator and perform a coefficient comparison for $[z^k]$, the coefficient of $z^k$ with $0\\leq k\\leq 5$ and obtain this way six equations for the six unknowns.</p>\n\n<p>\\begin{align*}\nz&=\\left(\\alpha_0z+\\beta_0\\right)\\left(z-\\zeta_1\\right)^2\\left(z-\\overline{\\zeta_1}\\right)^2\\\\\n&\\qquad+\\left(\\alpha_1z+\\beta_1\\right)(z+1)^2\\left(z-\\overline{\\zeta_1}\\right)^2\\\\\n&\\qquad+\\left(\\alpha_2z+\\beta_2\\right)(z+1)^2\\left(z-\\zeta_1\\right)^2\\tag{9}\n\\end{align*}</p>\n\n<p>But its laborious to obtain six equations for <em>all</em> coefficients this way. We prefer a two step approach which reduces somewhat the calculations. Nevertheless it <em>is</em> somewhat tedious.</p>\n\n<blockquote>\n <p><strong>Step 1:</strong> Coefficients of highest power two.</p>\n \n <p>We consider\n \\begin{align*}\n\\frac{z}{(z^3+1)^2}=\n\\frac{\\alpha_0z+\\beta_0}{\\left(z+1\\right)^2}\n+\\frac{\\alpha_1z+\\beta_1}{\\left(z-\\zeta_1\\right)^2}\n+\\frac{\\alpha_2z+\\beta_2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\\tag{10}\n\\end{align*}</p>\n \n <p>We now multiply both sides of (10) with $(z+1)^2$ and evaluate the resulting equation at $z=-1$. \n \\begin{align*}\n\\frac{z}{(z^3+1)^2}\\cdot(z+1)^2&=\\left(\\alpha_0z+\\beta_0\\right)\n+\\left(\\frac{\\alpha_1z+\\beta_1}{\\left(z-\\zeta_1\\right)^2}+\\frac{\\alpha_2z+\\beta_2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\\right)\n\\cdot\\left(z+1\\right)^2\n\\end{align*}\n Note that at the left hand side $-1$ is a <em><a href=\"https://en.wikipedia.org/wiki/Removable_singularity\" rel=\"nofollow noreferrer\">removable singularity</a></em> and evaluating both sides at $z=-1$ gives\n \\begin{align*}\n\\color{blue}{-\\frac{1}{9}=-\\alpha_0+\\beta_0}\n\\end{align*}</p>\n \n <p>Multiplication of (9) with $(z-\\zeta_1)^2$ and with $(z-\\overline{\\zeta_1})^2$ and evaluating at $z=\\zeta_1$ and $z=\\overline{\\zeta_1}$ gives two more equations\n \\begin{align*}\n\\color{blue}{-\\frac{1}{9}}&\\color{blue}{=-\\alpha_1\\zeta_1+\\beta_1}\\\\\n\\color{blue}{-\\frac{1}{9}}&\\color{blue}{=-\\alpha_2\\overline{\\zeta_1}+\\beta_2}\n\\end{align*}</p>\n</blockquote>\n\n<p>We put the results of these three equations into (9) and obtain</p>\n\n<p>\\begin{align*}\nz&=\\left(\\alpha_0z+\\left(\\alpha_0-1/9\\right)\\right)\\left(z-\\zeta_1\\right)^2\\left(z-\\overline{\\zeta_1}\\right)^2\\\\\n&\\qquad+\\left(\\alpha_1z-(\\alpha_1\\zeta_1+1/9)\\right)(z+1)^2\\left(z-\\overline{\\zeta_1}\\right)^2\\\\\n&\\qquad+\\left(\\alpha_2z-(\\alpha_1\\overline{\\zeta_1}+1/9)\\right)(z+1)^2\\left(z-\\zeta_1\\right)^2\\tag{11}\n\\end{align*}</p>\n\n<p>Since we have three unknowns $\\alpha_0,\\alpha_1,\\alpha_2$ left, we need three more equations for determination. We next consider coefficient comparison for $z^0,z^1$ and $z^2$. Observe that each of the three summands in (11) contain five factors. 
So for $[z^0]$ we derive from each summand $\\binom{5}{0}=1$ terms, for $[z^1]$ we derive $\\binom{5}{1}=5$ terms and for $[z^2]$ we derive from each summand $\\binom{5}{2}=10$ terms.</p>\n\n<p>We will also consequently use the formulas from (5) to simplify expressions.</p>\n\n<blockquote>\n <p><strong>Step 2:</strong> Coefficient comparison for $[z^0],[z^1]$ and $[z^2]$.</p>\n \n <p>Coefficient comparison $[z^0]$:</p>\n \n <p>We obtain from (11)\n \\begin{align*}\n0&=\\left[\\left(\\alpha_0-\\frac{1}{9}\\right)\\zeta_1^2\\overline{\\zeta_1}^2\\right]\n+\\left[-\\left(\\alpha_1+\\frac{1}{9}\\right)\\overline{\\zeta_1}^2\\right]\n+\\left[-\\left(\\alpha_2+\\frac{1}{9}\\right)\\zeta_1^2\\right]\\\\\n&=\\left[\\alpha_0-\\frac{1}{9}\\right]+\\left[-\\alpha_1\\overline{\\zeta_1}+\\frac{1}{9}\\zeta_1\\right]\n+\\left[-\\alpha_2\\zeta_1+\\frac{1}{9}\\overline{\\zeta_1}\\right]\\\\\n\\color{blue}{0}&\\color{blue}{=\\alpha_1-\\alpha_1\\overline{\\zeta_1}-\\alpha_2\\zeta_1}\n\\end{align*}</p>\n \n <p>Coefficient comparison $[z^1]$:</p>\n \n <p>We obtain from (11)\n \\begin{align*}\n1&=\\left[\\alpha_0\\zeta_1^2\\overline{\\zeta_1}^2+2\\left(\\alpha_0-\\frac{1}{9}\\right)\\left(-\\zeta_1\\overline{\\zeta_1}^2-\\zeta_1^2\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[\\alpha_1\\overline{\\zeta_1}^2-2\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(\\overline{\\zeta_1}^2-\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[\\alpha_2\\zeta_1^2-2\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(\\zeta_1^2-\\zeta_1\\right)\\right]\\\\\n&=\\left[\\alpha_0+2\\left(\\alpha_0-\\frac{1}{9}\\right)\\left(-\\overline{\\zeta_1}-\\zeta_1\\right)\\right]\\\\\n&\\qquad+\\left[-\\alpha_1\\zeta_1-2\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(-\\zeta_1-\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[-\\alpha_2\\overline{\\zeta_1}-2\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(-\\overline{\\zeta_1}-\\zeta_1\\right)\\right]\\\\\n&=\\left[-\\alpha_0+\\frac{1}{3}\\right]+\\left[\\alpha_1\\zeta_1+\\frac{1}{3}\\right]\n+\\left[\\alpha_2\\overline{\\zeta_1}+\\frac{1}{3}\\right]\\\\\n\\color{blue}{0}&\\color{blue}{=\\alpha_0-\\alpha_1\\zeta_1-\\alpha_2\\overline{\\zeta_1}}\n\\end{align*}</p>\n \n <p>Coefficient comparison $[z^2]$:</p>\n \n <p>We obtain from (11)\n 
\\begin{align*}\n0&=\\left[2\\alpha_0\\left(-\\zeta_1\\overline{\\zeta_1}^2-\\zeta_1^2\\overline{\\zeta_1}\\right)\n+\\left(\\alpha_0-\\frac{1}{9}\\right)\\left(\\overline{\\zeta_1}^2+\\zeta_1^2+4\\zeta_1\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[2\\alpha_1\\left(\\overline{\\zeta_1}^2-\\overline{\\zeta_1}\\right)\n-\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(\\overline{\\zeta_1}^2+1-4\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[2\\alpha_2\\left(\\zeta_1^2-\\zeta_1\\right)\n-\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(\\zeta_1^2+1-4\\zeta_1\\right)\\right]\\\\\n&=\\left[2\\alpha_0\\left(-\\overline{\\zeta_1}-\\zeta_1\\right)\n+\\left(\\alpha_0-\\frac{1}{9}\\right)\\left(-\\zeta_1-\\overline{\\zeta_1}+4\\right)\\right]\\\\\n&\\qquad+\\left[2\\alpha_1\\left(-\\zeta_1-\\overline{\\zeta_1}\\right)\n-\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(-\\zeta_1+1-4\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[2\\alpha_2\\left(-\\overline{\\zeta_1}-\\zeta_1\\right)\n-\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(-\\overline{\\zeta_1}+1-4\\zeta_1\\right)\\right]\\\\\n&=\\left[-2\\alpha_0+\\left(\\alpha_0-\\frac{1}{9}\\right)3\\right]+\\left[-2\\alpha_1-\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(-3\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[-2\\alpha_2-\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(-3\\zeta_1\\right)\\right]\\\\\n&=\\left[\\alpha_0-\\frac{1}{3}\\right]+\\left[\\alpha_1+\\frac{1}{3}\\overline{\\zeta_1}\\right]\n+\\left[\\alpha_2+\\frac{1}{3}\\zeta_1\\right]\\\\\n\\color{blue}{0}&\\color{blue}{=\\alpha_0+\\alpha_1+\\alpha_2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Now it's time to harvest. </p>\n\n<blockquote>\n <p><strong>Step 3:</strong> Linear equations for $\\alpha_0,\\alpha_1,\\alpha_2$</p>\n \n <p>We obtain via comparison of coefficients the three equations\n \\begin{align*}\n\\alpha_1-\\alpha_1\\overline{\\zeta_1}-\\alpha_2\\zeta_1&=0\\\\\n\\alpha_0-\\alpha_1\\zeta_1-\\alpha_2\\overline{\\zeta_1}&=0\\\\\n\\alpha_0+\\alpha_1+\\alpha_2&=0\n\\end{align*}\n with solutions\n \\begin{align*}\n\\alpha_0=-\\frac{1}{9},\\quad\\alpha_1=\\frac{1}{9}\\overline{\\zeta_1},\\quad\\alpha_2=\\frac{1}{9}\\zeta_1\n\\end{align*}\n We obtain by putting these values in the equations of step 1 \n \\begin{align*}\n\\beta_0&=\\alpha_0-\\frac{1}{9}=-\\frac{2}{9}\\\\\n\\beta_1&=\\alpha_1\\zeta_1-\\frac{1}{9}=\\frac{1}{9}\\overline{\\zeta_1}\\zeta_1=\\frac{1}{9}\\\\\n\\beta_2&=\\alpha_2\\overline{\\zeta_1}-\\frac{1}{9}=\\frac{1}{9}\\zeta_1\\overline{\\zeta_1}=\\frac{1}{9}\n\\end{align*}</p>\n</blockquote>\n\n<p>and we finally conclude\n\\begin{align*}\n\\frac{z}{(1+z^3)^2}&=\\sum_{j=0}^2\\frac{\\alpha_j z+\\beta_j}{(z-\\zeta_j)^2}\\\\\n&=\\frac{1}{9}\\left(\n-\\frac{z+2}{(z+1)^2}\n+\\frac{\\overline{\\zeta_1}z-2}{\\left(z-\\zeta_1\\right)^2}\n+\\frac{\\zeta_1z-2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\n\\right)\n\\end{align*}</p>\n"
},
{
"answer_id": 2090379,
"score": 2,
"text": "<p>Notice that</p>\n\n<p>$$\\frac z{(1+z^3)^2}=-z\\frac{\\partial}{\\partial z^3}\\frac1{1+z^3}=-z\\frac\\partial{\\partial z^3}\\sum_{n=0}^\\infty(-1)^n(z^3)^n=\\sum_{n=0}^\\infty n(-1)^{n+1}z^{3n-2}$$</p>\n\n<p>This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in <a href=\"https://math.stackexchange.com/questions/2048360/given-some-power-series-how-do-i-make-one-centered-at-a-different-point\">this answer</a> to get the expansion at $z=1$.</p>\n\n<p>$$=\\sum_{j=0}^\\infty \\left(\\sum_{k=j}^\\infty \\binom kj a_k \\right)(z-1)^j$$</p>\n\n<p>where $a_{3n+1}=(-1)^n(n+1)$ for $n\\in\\mathbb N$, else $a_k=0$.</p>\n\n<p>However, note that this method is usually only viable when the center for the radius of convergence was within the radius of convergence of the original series. For example, plugging $z=1$ into our \"expansion\" yields</p>\n\n<p>$$\\frac14\\stackrel?=1-2+3-4+5-6+\\dots$$</p>\n\n<p>However, an interesting case happens that we may regularize our divergent series to equal our original function by applying an <a href=\"https://en.wikipedia.org/wiki/Euler_summation\" rel=\"nofollow noreferrer\">Euler sum</a> to it:</p>\n\n<p>$$f(z)=\\sum_{j=0}^\\infty\\sum_{k=\\lfloor j/3\\rfloor}^\\infty \\binom{3k+1}j (-1)^k(k+1)(z-1)^j\\\\=\\sum_{j=0}^\\infty\\sum_{i=\\lfloor j/3\\rfloor}^\\infty\\frac1{2^i}\\sum_{k=\\lfloor j/3\\rfloor}^i\\binom ik\\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$</p>\n\n<p>which now converges as desired.</p>\n"
},
{
"answer_id": 2097807,
"score": 4,
"text": "<blockquote>\n <p>The following expansion of $f(z)=\\frac{z}{(1+z^3)^2}$ at $z=1$ is valid</p>\n \n <p>\\begin{align*}\nf(z)&=\\frac{z}{(1+z^3)^2}\\\\\n&=\\frac{1}{9}\\sum_{j=0}^\\infty\\left(3j+3-(-1)^j\\frac{3j+3}{2^{3j+2}}\\right)(z-1)^{3j}\\\\\n&\\qquad-\\frac{1}{9}\\sum_{j=0}^\\infty\\left(6j+5-(-1)^j\\frac{3j+4}{2^{3j+3}}\\right)(z-1)^{3j+1}\\\\\n&\\qquad+\\frac{1}{9}\\sum_{j=0}^\\infty\\left(3j+2-(-1)^j\\frac{3j+5}{2^{3j+4}}\\right)(z-1)^{3j+2}\\qquad\\qquad |z-1|<1\\tag{1}\\\\\n&=\\frac{1}{4}-\\frac{1}{2}(z-1)+\\frac{3}{16}(z-1)^2+\\frac{11}{16}(z-1)^3\\\\\n&\\qquad-\\frac{79}{64}(z-1)^4+\\frac{9}{16}(z-1)^5+\\frac{255}{256}(z-1)^6+\\cdots\n\\end{align*}</p>\n</blockquote>\n\n<p>We perform a <em><a href=\"https://en.wikipedia.org/wiki/Partial_fraction_decomposition\" rel=\"nofollow noreferrer\">partial fraction decomposition</a></em> of $f(z)$. The zeros of the denominator $(1+z^3)^2$ are\n\\begin{align*}\n\\zeta_0&=-1\\\\\n\\zeta_1&=e^\\frac{\\pi i}{3}=\\frac{1}{2}\\left(1+i\\sqrt{3}\\right)\\\\\n\\zeta_2&=e^{-\\frac{\\pi i}{3}}=\\frac{1}{2}\\left(1-i\\sqrt{3}\\right)=\\overline{\\zeta_1}\n\\end{align*}\nSince each of the zeros has muliplicity two we use the Ansatz:\n\\begin{align*}\n\\frac{z}{(1+z^3)^2}=\\sum_{j=0}^2\\frac{\\alpha_j z+\\beta_j}{(z-\\zeta_j)^2}\\tag{2}\n\\end{align*}</p>\n\n<blockquote>\n <p>Multiplication of the RHS of (2) and comparison of coefficients gives</p>\n \n <p>\\begin{align*}\n\\frac{z}{(z^3+1)^2}=\\frac{1}{9}\\left(-\\frac{z+2}{(z+1)^2}+\n\\frac{\\overline{\\zeta_1}z-2}{\\left(z-\\zeta_1\\right)^2}\n+\\frac{\\zeta_1z-2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\n\\right)\\tag{3}\n\\end{align*}</p>\n \n <p>Note, the second summand of the RHS of (3) is the complex conjugate of the third summand. This will help to simplify some calculations.</p>\n</blockquote>\n\n<p>Next we want to expand the terms of the RHS of (3) at $z=1$. In order to do so it's convenient to do some preparatory work. 
We consider the general aspect of expanding a term around $z=z_0$.</p>\n\n<p>We obtain\n\\begin{align*}\n\\frac{1}{(z-z_0)^n}&=\\frac{1}{\\left((z_1-z_0)+(z-z_1)\\right)^n}\\\\\n&=\\frac{1}{(z_1-z_0)^n}\\cdot\\frac{1}{\\left(1+\\frac{z-z_1}{z_1-z_0}\\right)^n}\\\\\n&=\\frac{1}{(z_1-z_0)^n}\\sum_{j=0}^\\infty \\binom{-n}{j}\\left(\\frac{z-z_0}{z_1-z_0}\\right)^j\\\\\n&=\\frac{1}{(z_1-z_0)^n}\\sum_{j=0}^\\infty \\binom{n+j-1}{j}\\left(\\frac{z-z_0}{z_0-z_1}\\right)^j\\tag{4}\\\\\n\\end{align*}</p>\n\n<p>Since we have to do a lot of calculations with the zeros\n$$\\left\\{-1,\\frac{1}{2}\\left(1+i\\sqrt{3}\\right),\\frac{1}{2}\\left(1-i\\sqrt{3}\\right)\\right\\}=\\left\\{-1,\\zeta_1,\\overline{\\zeta_1}\\right\\}$$ \nits convenient to prepare some intermediate results\n\\begin{array}{r|rrr|lll}\n\\cdot&-1&\\zeta_1&\\overline{\\zeta_1}\\\\\n\\hline\n-1&1&-\\zeta_1&-\\overline{\\zeta_1}&\\qquad \\frac{1}{\\zeta_0-1}=-\\frac{1}{2}\\\\\n\\zeta_1&-\\zeta_1&-\\overline{\\zeta_1}&1\n&\\qquad \\frac{1}{\\zeta_1-1}=-\\zeta_1&\\frac{1}{\\left(\\zeta_1-1\\right)^2}=-\\overline{\\zeta_1}\n&\\frac{1}{\\left(\\zeta_1-1\\right)^3}=1\\\\\n\\overline{\\zeta_1}&-\\overline{\\zeta_1}&1&-\\zeta_1\n&\\qquad \\frac{1}{\\overline{\\zeta_1}-1}=-\\overline{\\zeta_1}&\\frac{1}{\\left(\\overline{\\zeta_1}-1\\right)^2}=-\\zeta_1\n&\\frac{1}{\\left(\\overline{\\zeta_1}-1\\right)^3}=1\\tag{5}\\\\\n\\end{array}</p>\n\n<p>We are now ready and well prepared to continue.</p>\n\n<blockquote>\n <p>Evaluation of (4) with $n=2$ and at $z_1=1$ gives\n \\begin{align*}\n\\frac{1}{\\left(z-z_0\\right)^2}=\\frac{1}{\\left(z_0-1\\right)^2}\\sum_{j=0}^\\infty\\frac{j+1}{(z_0-1)^j}(z-1)^j\n\\end{align*}\n Since we use in (2) a numerator $\\alpha z+\\beta$ we respect it also in the expansion\n \\begin{align*}\n\\frac{\\alpha z+\\beta}{\\left(z-z_0\\right)^2}\n&=\\frac{\\alpha(z-1)+\\alpha+\\beta}{(z_0-1)^2}\\sum_{j=0}^\\infty\\frac{j+1}{(z_0-1)^j}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{(\\alpha z_0+\\beta)j+\\alpha+\\beta}{(z_0-1)^{j+2}}(z-1)^j\\tag{6}\n\\end{align*}</p>\n</blockquote>\n\n<p>Now it's time to harvest. We take the three terms of (2) with $-1,\\zeta_1,\\overline{\\zeta_1}$ and evaluate them using (6). 
\n\n<p>We are now ready and well prepared to continue.</p>\n\n<blockquote>\n <p>Evaluation of (4) with $n=2$ and at $z_1=1$ gives\n \\begin{align*}\n\\frac{1}{\\left(z-z_0\\right)^2}=\\frac{1}{\\left(z_0-1\\right)^2}\\sum_{j=0}^\\infty\\frac{j+1}{(z_0-1)^j}(z-1)^j\n\\end{align*}\n Since the numerators in (2) have the form $\\alpha z+\\beta$ we take this into account in the expansion\n \\begin{align*}\n\\frac{\\alpha z+\\beta}{\\left(z-z_0\\right)^2}\n&=\\frac{\\alpha(z-1)+\\alpha+\\beta}{(z_0-1)^2}\\sum_{j=0}^\\infty\\frac{j+1}{(z_0-1)^j}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{(\\alpha z_0+\\beta)j+\\alpha+\\beta}{(z_0-1)^{j+2}}(z-1)^j\\tag{6}\n\\end{align*}</p>\n</blockquote>\n\n<p>Now it's time to harvest. We take the three terms of (2) with $-1,\\zeta_1,\\overline{\\zeta_1}$ and evaluate them using (6). We also note that according to (5) $\\frac{1}{\\left(\\zeta_1-1\\right)^3}=1$ so that we can split the sum according to the index $j$ modulo $3$.</p>\n\n<blockquote>\n <p>We obtain using the relations in (5)\n \\begin{align*}\n\\frac{\\overline{\\zeta_1}z-2}{\\left(z-\\zeta_1\\right)^2}\n&=\\sum_{j=0}^\\infty\\frac{\\left(\\overline{\\zeta_1}\\zeta_1-2\\right)j+\\overline{\\zeta_1}-2}{\\left(\\zeta_1-1\\right)^{j+2}}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{\\left(-\\overline{\\zeta_1}\\right)\\left(-j+\\overline{\\zeta_1}-2\\right)}{\\left(\\zeta_1-1\\right)^j}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{\\overline{\\zeta_1}(j+2)+\\zeta_1}{\\left(\\zeta_1-1\\right)^j}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\left[(3j+2)\\overline{\\zeta_1}+\\zeta_1\\right](z-1)^{3j}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[(3j+3)\\overline{\\zeta_1}+\\zeta_1\\right]\\left(-\\zeta_1\\right)(z-1)^{3j+1}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[(3j+4)\\overline{\\zeta_1}+\\zeta_1\\right]\\left(-\\overline{\\zeta_1}\\right)(z-1)^{3j+2}\\\\\n&=\\sum_{j=0}^\\infty\\left[(3j+2)\\overline{\\zeta_1}+\\zeta_1\\right](z-1)^{3j}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[(3j+3)(-1)+\\overline{\\zeta_1}\\right](z-1)^{3j+1}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[(3j+4)\\zeta_1-1\\right](z-1)^{3j+2}\\\\\n\\end{align*}\n We can now use the fact that the second and the third summand of (3) are complex conjugates of each other and obtain\n \\begin{align*}\n\\frac{\\overline{\\zeta_1}z-2}{\\left(z-\\zeta_1\\right)^2}+\\frac{\\zeta_1z-2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\n&=\\sum_{j=0}^\\infty\\left[3j+3\\right](z-1)^{3j}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[-6j-5\\right](z-1)^{3j+1}\\\\\n&\\qquad+\\sum_{j=0}^\\infty\\left[3j+2\\right](z-1)^{3j+2}\\tag{7}\\\\\n\\end{align*}</p>\n</blockquote>
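\n\n<p>A quick plausibility check of (7): at $z=1$ only the $j=0$ term of the first sum survives and gives $3$, while direct evaluation of the left-hand side at $z=1$ gives\n\\begin{align*}\n\\left(\\overline{\\zeta_1}-2\\right)\\left(-\\overline{\\zeta_1}\\right)+\\left(\\zeta_1-2\\right)\\left(-\\zeta_1\\right)=\\left(\\overline{\\zeta_1}+1\\right)+\\left(\\zeta_1+1\\right)=3\n\\end{align*}\nas well.</p>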
\n\n<p>Last but not least the final term of (2)</p>\n\n<blockquote>\n <p>\\begin{align*}\n\\frac{z+2}{(z+1)^2}&=\\sum_{j=0}^\\infty\\frac{(-1+2)j+3}{(-2)^{j+2}}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty\\frac{j+3}{(-2)^{j+2}}(z-1)^j\\\\\n&=\\sum_{j=0}^\\infty (-1)^j\\frac{3j+3}{2^{3j+2}}(z-1)^{3j}\\\\\n&\\qquad+\\sum_{j=0}^\\infty (-1)^{j+1}\\frac{3j+4}{2^{3j+3}}(z-1)^{3j+1}\\\\\n&\\qquad+\\sum_{j=0}^\\infty (-1)^{j}\\frac{3j+5}{2^{3j+4}}(z-1)^{3j+2}\\tag{8}\\\\\n\\end{align*}</p>\n \n <p>Combining (7) and (8), the claim (1) follows.</p>\n</blockquote>
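\n\n<p>As a cross-check against the numerical expansion in (1): the constant term is $\\frac{1}{9}\\left(3-\\frac{3}{4}\\right)=\\frac{1}{4}$ and the coefficient of $(z-1)$ is $-\\frac{1}{9}\\left(5-\\frac{4}{8}\\right)=-\\frac{1}{2}$, matching the first two coefficients stated at the beginning.</p>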
\n\n<p><strong>Note:</strong> The singularities nearest to $z=1$ are $\\zeta_1$ and $\\overline{\\zeta_1}$, both having the same distance to $1$. This shows the radius of convergence is\n\\begin{align*}\n\\left|1-\\zeta_1\\right|&=\\left|1-\\overline{\\zeta_1}\\right|=\\left|1-\\frac{1}{2}\\left(1-i\\sqrt{3}\\right)\\right|\\\\\n&=\\frac{1}{2}\\left|1+i\\sqrt{3}\\right|\\\\\n&=1\n\\end{align*}</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Add-on 2017-01-15:</strong></p>\n \n <p>OP was asking how to obtain the coefficients $\\alpha_j,\\beta_j$, $j=0,1,2$ in the partial fraction decomposition (2)</p>\n \n <p>\\begin{align*}\n\\frac{z}{(1+z^3)^2}=\\sum_{j=0}^2\\frac{\\alpha_j z+\\beta_j}{(z-\\zeta_j)^2}\n\\end{align*}</p>\n</blockquote>\n\n<p>One method is to multiply both sides by the denominator and perform a coefficient comparison for $[z^k]$, the coefficient of $z^k$ with $0\\leq k\\leq 5$, obtaining this way six equations for the six unknowns.</p>\n\n<p>\\begin{align*}\nz&=\\left(\\alpha_0z+\\beta_0\\right)\\left(z-\\zeta_1\\right)^2\\left(z-\\overline{\\zeta_1}\\right)^2\\\\\n&\\qquad+\\left(\\alpha_1z+\\beta_1\\right)(z+1)^2\\left(z-\\overline{\\zeta_1}\\right)^2\\\\\n&\\qquad+\\left(\\alpha_2z+\\beta_2\\right)(z+1)^2\\left(z-\\zeta_1\\right)^2\\tag{9}\n\\end{align*}</p>\n\n<p>But it's laborious to obtain six equations for <em>all</em> coefficients this way. We prefer a two-step approach which somewhat reduces the calculations. Nevertheless it <em>is</em> somewhat tedious.</p>\n\n<blockquote>\n <p><strong>Step 1:</strong> Coefficients of highest power two.</p>\n \n <p>We consider\n \\begin{align*}\n\\frac{z}{(z^3+1)^2}=\n\\frac{\\alpha_0z+\\beta_0}{\\left(z+1\\right)^2}\n+\\frac{\\alpha_1z+\\beta_1}{\\left(z-\\zeta_1\\right)^2}\n+\\frac{\\alpha_2z+\\beta_2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\\tag{10}\n\\end{align*}</p>\n \n <p>We now multiply both sides of (10) with $(z+1)^2$ and evaluate the resulting equation at $z=-1$. \n \\begin{align*}\n\\frac{z}{(z^3+1)^2}\\cdot(z+1)^2&=\\left(\\alpha_0z+\\beta_0\\right)\n+\\left(\\frac{\\alpha_1z+\\beta_1}{\\left(z-\\zeta_1\\right)^2}+\\frac{\\alpha_2z+\\beta_2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\\right)\n\\cdot\\left(z+1\\right)^2\n\\end{align*}\n Note that on the left-hand side $-1$ is a <em><a href=\"https://en.wikipedia.org/wiki/Removable_singularity\" rel=\"nofollow noreferrer\">removable singularity</a></em> and evaluating both sides at $z=-1$ gives\n \\begin{align*}\n\\color{blue}{-\\frac{1}{9}=-\\alpha_0+\\beta_0}\n\\end{align*}</p>\n \n <p>Multiplication of (10) with $(z-\\zeta_1)^2$ and with $(z-\\overline{\\zeta_1})^2$ and evaluating at $z=\\zeta_1$ and $z=\\overline{\\zeta_1}$ gives two more equations\n \\begin{align*}\n\\color{blue}{-\\frac{1}{9}}&\\color{blue}{=\\alpha_1\\zeta_1+\\beta_1}\\\\\n\\color{blue}{-\\frac{1}{9}}&\\color{blue}{=\\alpha_2\\overline{\\zeta_1}+\\beta_2}\n\\end{align*}</p>\n</blockquote>
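\n\n<p>The value $-\\frac{1}{9}$ can be made explicit: since $z^3+1=(z+1)\\left(z^2-z+1\\right)$ we have\n\\begin{align*}\n\\frac{z}{(z^3+1)^2}\\cdot(z+1)^2=\\frac{z}{\\left(z^2-z+1\\right)^2}\\,,\\qquad\\frac{z}{\\left(z^2-z+1\\right)^2}\\bigg|_{z=-1}=\\frac{-1}{(1+1+1)^2}=-\\frac{1}{9}\n\\end{align*}\nand the evaluations at $z=\\zeta_1$ and $z=\\overline{\\zeta_1}$ are analogous.</p>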
\n\n<p>We put the results of these three equations into (9) and obtain</p>\n\n<p>\\begin{align*}\nz&=\\left(\\alpha_0z+\\left(\\alpha_0-1/9\\right)\\right)\\left(z-\\zeta_1\\right)^2\\left(z-\\overline{\\zeta_1}\\right)^2\\\\\n&\\qquad+\\left(\\alpha_1z-(\\alpha_1\\zeta_1+1/9)\\right)(z+1)^2\\left(z-\\overline{\\zeta_1}\\right)^2\\\\\n&\\qquad+\\left(\\alpha_2z-(\\alpha_2\\overline{\\zeta_1}+1/9)\\right)(z+1)^2\\left(z-\\zeta_1\\right)^2\\tag{11}\n\\end{align*}</p>\n\n<p>Since we have three unknowns $\\alpha_0,\\alpha_1,\\alpha_2$ left, we need three more equations to determine them. We next consider coefficient comparison for $z^0,z^1$ and $z^2$. Observe that each of the three summands in (11) consists of five linear factors. So for $[z^0]$ we derive from each summand $\\binom{5}{0}=1$ term, for $[z^1]$ we derive $\\binom{5}{1}=5$ terms and for $[z^2]$ we derive from each summand $\\binom{5}{2}=10$ terms.</p>\n\n<p>We will also consistently use the formulas from (5) to simplify expressions.</p>\n\n<blockquote>\n <p><strong>Step 2:</strong> Coefficient comparison for $[z^0],[z^1]$ and $[z^2]$.</p>\n \n <p>Coefficient comparison $[z^0]$:</p>\n \n <p>We obtain from (11)\n \\begin{align*}\n0&=\\left[\\left(\\alpha_0-\\frac{1}{9}\\right)\\zeta_1^2\\overline{\\zeta_1}^2\\right]\n+\\left[-\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\overline{\\zeta_1}^2\\right]\n+\\left[-\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\zeta_1^2\\right]\\\\\n&=\\left[\\alpha_0-\\frac{1}{9}\\right]+\\left[-\\alpha_1\\overline{\\zeta_1}+\\frac{1}{9}\\zeta_1\\right]\n+\\left[-\\alpha_2\\zeta_1+\\frac{1}{9}\\overline{\\zeta_1}\\right]\\\\\n\\color{blue}{0}&\\color{blue}{=\\alpha_0-\\alpha_1\\overline{\\zeta_1}-\\alpha_2\\zeta_1}\n\\end{align*}</p>\n \n <p>Coefficient comparison $[z^1]$:</p>\n \n <p>We obtain from (11)\n \\begin{align*}\n1&=\\left[\\alpha_0\\zeta_1^2\\overline{\\zeta_1}^2+2\\left(\\alpha_0-\\frac{1}{9}\\right)\\left(-\\zeta_1\\overline{\\zeta_1}^2-\\zeta_1^2\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[\\alpha_1\\overline{\\zeta_1}^2-2\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(\\overline{\\zeta_1}^2-\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[\\alpha_2\\zeta_1^2-2\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(\\zeta_1^2-\\zeta_1\\right)\\right]\\\\\n&=\\left[\\alpha_0+2\\left(\\alpha_0-\\frac{1}{9}\\right)\\left(-\\overline{\\zeta_1}-\\zeta_1\\right)\\right]\\\\\n&\\qquad+\\left[-\\alpha_1\\zeta_1-2\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(-\\zeta_1-\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[-\\alpha_2\\overline{\\zeta_1}-2\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(-\\overline{\\zeta_1}-\\zeta_1\\right)\\right]\\\\\n&=\\left[-\\alpha_0+\\frac{2}{9}\\right]+\\left[\\alpha_1\\zeta_1+\\frac{2}{9}\\right]\n+\\left[\\alpha_2\\overline{\\zeta_1}+\\frac{2}{9}\\right]\\\\\n\\color{blue}{-\\frac{1}{3}}&\\color{blue}{=\\alpha_0-\\alpha_1\\zeta_1-\\alpha_2\\overline{\\zeta_1}}\n\\end{align*}</p>\n \n <p>Coefficient comparison $[z^2]$:</p>\n \n <p>We obtain from (11)\n \\begin{align*}\n0&=\\left[2\\alpha_0\\left(-\\zeta_1\\overline{\\zeta_1}^2-\\zeta_1^2\\overline{\\zeta_1}\\right)\n+\\left(\\alpha_0-\\frac{1}{9}\\right)\\left(\\overline{\\zeta_1}^2+\\zeta_1^2+4\\zeta_1\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[2\\alpha_1\\left(\\overline{\\zeta_1}^2-\\overline{\\zeta_1}\\right)\n-\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(\\overline{\\zeta_1}^2+1-4\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[2\\alpha_2\\left(\\zeta_1^2-\\zeta_1\\right)\n-\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(\\zeta_1^2+1-4\\zeta_1\\right)\\right]\\\\\n&=\\left[2\\alpha_0\\left(-\\overline{\\zeta_1}-\\zeta_1\\right)\n+\\left(\\alpha_0-\\frac{1}{9}\\right)\\left(-\\zeta_1-\\overline{\\zeta_1}+4\\right)\\right]\\\\\n&\\qquad+\\left[2\\alpha_1\\left(-\\zeta_1-\\overline{\\zeta_1}\\right)\n-\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(-\\zeta_1+1-4\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[2\\alpha_2\\left(-\\overline{\\zeta_1}-\\zeta_1\\right)\n-\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(-\\overline{\\zeta_1}+1-4\\zeta_1\\right)\\right]\\\\\n&=\\left[-2\\alpha_0+\\left(\\alpha_0-\\frac{1}{9}\\right)3\\right]+\\left[-2\\alpha_1-\\left(\\alpha_1\\zeta_1+\\frac{1}{9}\\right)\\left(-3\\overline{\\zeta_1}\\right)\\right]\\\\\n&\\qquad+\\left[-2\\alpha_2-\\left(\\alpha_2\\overline{\\zeta_1}+\\frac{1}{9}\\right)\\left(-3\\zeta_1\\right)\\right]\\\\\n&=\\left[\\alpha_0-\\frac{1}{3}\\right]+\\left[\\alpha_1+\\frac{1}{3}\\overline{\\zeta_1}\\right]\n+\\left[\\alpha_2+\\frac{1}{3}\\zeta_1\\right]\\\\\n\\color{blue}{0}&\\color{blue}{=\\alpha_0+\\alpha_1+\\alpha_2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Now it's time to harvest. </p>\n\n<blockquote>\n <p><strong>Step 3:</strong> Linear equations for $\\alpha_0,\\alpha_1,\\alpha_2$</p>\n \n <p>We obtain via comparison of coefficients the three equations\n \\begin{align*}\n\\alpha_0-\\alpha_1\\overline{\\zeta_1}-\\alpha_2\\zeta_1&=0\\\\\n\\alpha_0-\\alpha_1\\zeta_1-\\alpha_2\\overline{\\zeta_1}&=-\\frac{1}{3}\\\\\n\\alpha_0+\\alpha_1+\\alpha_2&=0\n\\end{align*}\n with solutions\n \\begin{align*}\n\\alpha_0=-\\frac{1}{9},\\quad\\alpha_1=\\frac{1}{9}\\overline{\\zeta_1},\\quad\\alpha_2=\\frac{1}{9}\\zeta_1\n\\end{align*}\n We obtain by putting these values in the equations of step 1 \n \\begin{align*}\n\\beta_0&=\\alpha_0-\\frac{1}{9}=-\\frac{2}{9}\\\\\n\\beta_1&=-\\alpha_1\\zeta_1-\\frac{1}{9}=-\\frac{1}{9}\\overline{\\zeta_1}\\zeta_1-\\frac{1}{9}=-\\frac{2}{9}\\\\\n\\beta_2&=-\\alpha_2\\overline{\\zeta_1}-\\frac{1}{9}=-\\frac{1}{9}\\zeta_1\\overline{\\zeta_1}-\\frac{1}{9}=-\\frac{2}{9}\n\\end{align*}</p>\n</blockquote>\n\n<p>and we finally conclude\n\\begin{align*}\n\\frac{z}{(1+z^3)^2}&=\\sum_{j=0}^2\\frac{\\alpha_j z+\\beta_j}{(z-\\zeta_j)^2}\\\\\n&=\\frac{1}{9}\\left(\n-\\frac{z+2}{(z+1)^2}\n+\\frac{\\overline{\\zeta_1}z-2}{\\left(z-\\zeta_1\\right)^2}\n+\\frac{\\zeta_1z-2}{\\left(z-\\overline{\\zeta_1}\\right)^2}\n\\right)\n\\end{align*}</p>
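\n\n<p>As a final plausibility check: at $z=0$ the right-hand side gives, using $\\frac{1}{\\zeta_1^2}=-\\zeta_1$ and $\\frac{1}{\\overline{\\zeta_1}^2}=-\\overline{\\zeta_1}$,\n\\begin{align*}\n\\frac{1}{9}\\left(-2+2\\zeta_1+2\\overline{\\zeta_1}\\right)=\\frac{1}{9}\\left(-2+2\\right)=0=f(0)\n\\end{align*}\nas it must.</p>\n"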
},
{
"answer_id": 2090379,
"score": 2,
"text": "<p>Notice that</p>\n\n<p>$$\\frac z{(1+z^3)^2}=-z\\frac{\\partial}{\\partial z^3}\\frac1{1+z^3}=-z\\frac\\partial{\\partial z^3}\\sum_{n=0}^\\infty(-1)^n(z^3)^n=\\sum_{n=0}^\\infty n(-1)^{n+1}z^{3n-2}$$</p>\n\n<p>This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in <a href=\"https://math.stackexchange.com/questions/2048360/given-some-power-series-how-do-i-make-one-centered-at-a-different-point\">this answer</a> to get the expansion at $z=1$.</p>\n\n<p>$$=\\sum_{j=0}^\\infty \\left(\\sum_{k=j}^\\infty \\binom kj a_k \\right)(z-1)^j$$</p>\n\n<p>where $a_{3n+1}=(-1)^n(n+1)$ for $n\\in\\mathbb N$, else $a_k=0$.</p>\n\n<p>However, note that this method is usually only viable when the center for the radius of convergence was within the radius of convergence of the original series. For example, plugging $z=1$ into our \"expansion\" yields</p>\n\n<p>$$\\frac14\\stackrel?=1-2+3-4+5-6+\\dots$$</p>\n\n<p>However, an interesting case happens that we may regularize our divergent series to equal our original function by applying an <a href=\"https://en.wikipedia.org/wiki/Euler_summation\" rel=\"nofollow noreferrer\">Euler sum</a> to it:</p>\n\n<p>$$f(z)=\\sum_{j=0}^\\infty\\sum_{k=\\lfloor j/3\\rfloor}^\\infty \\binom{3k+1}j (-1)^k(k+1)(z-1)^j\\\\=\\sum_{j=0}^\\infty\\sum_{i=\\lfloor j/3\\rfloor}^\\infty\\frac1{2^i}\\sum_{k=\\lfloor j/3\\rfloor}^i\\binom ik\\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$</p>\n\n<p>which now converges as desired.</p>\n"
},
},
{
"answer_id": 2090379,
"score": 2,
"text": "<p>Notice that</p>\n\n<p>$$\\frac z{(1+z^3)^2}=-z\\frac{\\partial}{\\partial z^3}\\frac1{1+z^3}=-z\\frac\\partial{\\partial z^3}\\sum_{n=0}^\\infty(-1)^n(z^3)^n=\\sum_{n=0}^\\infty n(-1)^{n+1}z^{3n-2}$$</p>\n\n<p>This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in <a href=\"https://math.stackexchange.com/questions/2048360/given-some-power-series-how-do-i-make-one-centered-at-a-different-point\">this answer</a> to get the expansion at $z=1$.</p>\n\n<p>$$=\\sum_{j=0}^\\infty \\left(\\sum_{k=j}^\\infty \\binom kj a_k \\right)(z-1)^j$$</p>\n\n<p>where $a_{3n+1}=(-1)^n(n+1)$ for $n\\in\\mathbb N$, else $a_k=0$.</p>\n\n<p>However, note that this method is usually only viable when the center for the radius of convergence was within the radius of convergence of the original series. For example, plugging $z=1$ into our \"expansion\" yields</p>\n\n<p>$$\\frac14\\stackrel?=1-2+3-4+5-6+\\dots$$</p>\n\n<p>However, an interesting case happens that we may regularize our divergent series to equal our original function by applying an <a href=\"https://en.wikipedia.org/wiki/Euler_summation\" rel=\"nofollow noreferrer\">Euler sum</a> to it:</p>\n\n<p>$$f(z)=\\sum_{j=0}^\\infty\\sum_{k=\\lfloor j/3\\rfloor}^\\infty \\binom{3k+1}j (-1)^k(k+1)(z-1)^j\\\\=\\sum_{j=0}^\\infty\\sum_{i=\\lfloor j/3\\rfloor}^\\infty\\frac1{2^i}\\sum_{k=\\lfloor j/3\\rfloor}^i\\binom ik\\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$</p>\n\n<p>which now converges as desired.</p>\n"
}
] | Question: Power series expansion of $\frac{z}{(z^3+1)^2}$ around $z=1$
Answer 1: <blockquote>
<p>The following expansion of $f(z)=\frac{z}{(1+z^3)^2}$ at $z=1$ is valid</p>
<p>\begin{align*}
f(z)&=\frac{z}{(1+z^3)^2}\\
&=\frac{1}{9}\sum_{j=0}^\infty\left(3j+3-(-1)^j\frac{3j+3}{2^{3j+2}}\right)(z-1)^{3j}\\
&\qquad-\frac{1}{9}\sum_{j=0}^\infty\left(6j+5-(-1)^j\frac{3j+4}{2^{3j+3}}\right)(z-1)^{3j+1}\\
&\qquad+\frac{1}{9}\sum_{j=0}^\infty\left(3j+2-(-1)^j\frac{3j+5}{2^{3j+4}}\right)(z-1)^{3j+2}\qquad\qquad |z-1|<1\tag{1}\\
&=\frac{1}{4}-\frac{1}{2}(z-1)+\frac{3}{16}(z-1)^2+\frac{11}{16}(z-1)^3\\
&\qquad-\frac{79}{64}(z-1)^4+\frac{9}{16}(z-1)^5+\frac{255}{256}(z-1)^6+\cdots
\end{align*}</p>
</blockquote>
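<p><em>Side note (not part of the original answer):</em> the claim is easy to cross-check by machine. A minimal sketch, assuming <code>sympy</code> is available (all names below are ours):</p>

<pre><code>from sympy import symbols, simplify

z, w = symbols('z w')
f = z / (1 + z**3)**2

# expand f around z = 1 by substituting z = w + 1 and expanding at w = 0
s = f.subs(z, w + 1).series(w, 0, 7).removeO()
print([simplify(s.coeff(w, k)) for k in range(7)])
# -> [1/4, -1/2, 3/16, 11/16, -79/64, 9/16, 255/256]
</code></pre>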
<p>We perform a <em><a href="https://en.wikipedia.org/wiki/Partial_fraction_decomposition" rel="nofollow noreferrer">partial fraction decomposition</a></em> of $f(z)$. The zeros of the denominator $(1+z^3)^2$ are
\begin{align*}
\zeta_0&=-1\\
\zeta_1&=e^\frac{\pi i}{3}=\frac{1}{2}\left(1+i\sqrt{3}\right)\\
\zeta_2&=e^{-\frac{\pi i}{3}}=\frac{1}{2}\left(1-i\sqrt{3}\right)=\overline{\zeta_1}
\end{align*}
Since each of the zeros has multiplicity two, we use the Ansatz:
\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\tag{2}
\end{align*}</p>
<blockquote>
<p>Multiplication of the RHS of (2) and comparison of coefficients gives</p>
<p>\begin{align*}
\frac{z}{(z^3+1)^2}=\frac{1}{9}\left(-\frac{z+2}{(z+1)^2}+
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)\tag{3}
\end{align*}</p>
<p>Note, the second summand of the RHS of (3) is the complex conjugate of the third summand. This will help to simplify some calculations.</p>
</blockquote>
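<p>As a quick sanity check of (3), one can compare both sides numerically at a few sample points; a small sketch of ours using only the standard library:</p>

<pre><code>import cmath

zeta1 = cmath.exp(1j * cmath.pi / 3)        # (1 + i*sqrt(3)) / 2
zbar  = zeta1.conjugate()

lhs = lambda z: z / (1 + z**3)**2
rhs = lambda z: (-(z + 2) / (z + 1)**2
                 + (zbar * z - 2) / (z - zeta1)**2
                 + (zeta1 * z - 2) / (z - zbar)**2) / 9

for z in (0.3, 1.0, 2.5 + 0.7j):
    print(abs(lhs(z) - rhs(z)))             # each difference ~ 1e-16
</code></pre>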
<p>Next we want to expand the terms of the RHS of (3) at $z=1$. In order to do so it's convenient to do some preparatory work. We consider the general aspect of expanding a term around $z=z_0$.</p>
<p>We obtain
\begin{align*}
\frac{1}{(z-z_0)^n}&=\frac{1}{\left((z_1-z_0)+(z-z_1)\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\cdot\frac{1}{\left(1+\frac{z-z_1}{z_1-z_0}\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{-n}{j}\left(\frac{z-z_1}{z_1-z_0}\right)^j\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{n+j-1}{j}\left(\frac{z-z_1}{z_0-z_1}\right)^j\tag{4}\\
\end{align*}</p>
<p>Since we have to do a lot of calculations with the zeros
$$\left\{-1,\frac{1}{2}\left(1+i\sqrt{3}\right),\frac{1}{2}\left(1-i\sqrt{3}\right)\right\}=\left\{-1,\zeta_1,\overline{\zeta_1}\right\}$$
it's convenient to prepare some intermediate results
\begin{array}{r|rrr|lll}
\cdot&-1&\zeta_1&\overline{\zeta_1}\\
\hline
-1&1&-\zeta_1&-\overline{\zeta_1}&\qquad \frac{1}{\zeta_0-1}=-\frac{1}{2}\\
\zeta_1&-\zeta_1&-\overline{\zeta_1}&1
&\qquad \frac{1}{\zeta_1-1}=-\zeta_1&\frac{1}{\left(\zeta_1-1\right)^2}=-\overline{\zeta_1}
&\frac{1}{\left(\zeta_1-1\right)^3}=1\\
\overline{\zeta_1}&-\overline{\zeta_1}&1&-\zeta_1
&\qquad \frac{1}{\overline{\zeta_1}-1}=-\overline{\zeta_1}&\frac{1}{\left(\overline{\zeta_1}-1\right)^2}=-\zeta_1
&\frac{1}{\left(\overline{\zeta_1}-1\right)^3}=1\tag{5}\\
\end{array}</p>
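<p>The reciprocal identities on the right of (5) follow from $\zeta_1-1=e^{2\pi i/3}$; they can also be spot-checked numerically (a sketch of ours):</p>

<pre><code>import cmath

zeta1 = cmath.exp(1j * cmath.pi / 3)

print(abs(1 / (zeta1 - 1) + zeta1))                 # 1/(zeta_1 - 1)   = -zeta_1
print(abs(1 / (zeta1 - 1)**2 + zeta1.conjugate()))  # 1/(zeta_1 - 1)^2 = -conj(zeta_1)
print(abs(1 / (zeta1 - 1)**3 - 1))                  # 1/(zeta_1 - 1)^3 = 1
</code></pre>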
<p>We are now ready and well prepared to continue.</p>
<blockquote>
<p>Evaluation of (4) with $n=2$ and at $z_1=1$ gives
\begin{align*}
\frac{1}{\left(z-z_0\right)^2}=\frac{1}{\left(z_0-1\right)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j
\end{align*}
Since we use in (2) a numerator $\alpha z+\beta$, we take it into account in the expansion as well
\begin{align*}
\frac{\alpha z+\beta}{\left(z-z_0\right)^2}
&=\frac{\alpha(z-1)+\alpha+\beta}{(z_0-1)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{(\alpha z_0+\beta)j+\alpha+\beta}{(z_0-1)^{j+2}}(z-1)^j\tag{6}
\end{align*}</p>
</blockquote>
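<p>Formula (6) can be tested against direct evaluation; the sketch below (ours, with an arbitrarily chosen sample point) uses the parameters of the second summand of (3) and a point with $|z-1|<|z_0-1|=1$:</p>

<pre><code>import cmath

zeta1 = cmath.exp(1j * cmath.pi / 3)
alpha, beta, z0 = zeta1.conjugate(), -2, zeta1

z = 1.2                                     # |z - 1| = 0.2 < |z0 - 1| = 1
exact  = (alpha * z + beta) / (z - z0)**2
approx = sum(((alpha * z0 + beta) * j + alpha + beta) / (z0 - 1)**(j + 2)
             * (z - 1)**j for j in range(200))
print(abs(exact - approx))                  # ~ 0 up to rounding
</code></pre>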
<p>Now it's time to harvest. We take the three terms of (2) with $-1,\zeta_1,\overline{\zeta_1}$ and evaluate them using (6). We also note that according to (5) $\frac{1}{\left(\zeta_1-1\right)^3}=1$ so that we can split the sum according to the index $j$ modulo $3$.</p>
<blockquote>
<p>We obtain using the relations in (5)
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
&=\sum_{j=0}^\infty\frac{\left(\overline{\zeta_1}\zeta_1-2\right)j+\overline{\zeta_1}-2}{\left(\zeta_1-1\right)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\left(-\overline{\zeta_1}\right)\left(-j+\overline{\zeta_1}-2\right)}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\overline{\zeta_1}(j+2)+\zeta_1}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)\overline{\zeta_1}+\zeta_1\right]\left(-\zeta_1\right)(z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}+\zeta_1\right]\left(-\overline{\zeta_1}\right)(z-1)^{3j+2}\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)(-1)+\overline{\zeta_1}\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\zeta_1-1\right](z-1)^{3j+2}\\
\end{align*}
We can now use that the two fractions are complex conjugates of each other and obtain
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
&=\sum_{j=0}^\infty\left[3j+3\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[-6j-5\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[3j+2\right](z-1)^{3j+2}\tag{7}\\
\end{align*}</p>
</blockquote>
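<p>Partial sums of (7) should converge to the sum of the two fractions for $|z-1|<1$; a short numerical check (ours):</p>

<pre><code>import cmath

zeta1 = cmath.exp(1j * cmath.pi / 3)
zbar  = zeta1.conjugate()

z = 1.5
exact  = (zbar * z - 2) / (z - zeta1)**2 + (zeta1 * z - 2) / (z - zbar)**2
approx = sum((3*j + 3) * (z - 1)**(3*j)
             - (6*j + 5) * (z - 1)**(3*j + 1)
             + (3*j + 2) * (z - 1)**(3*j + 2) for j in range(100))
print(abs(exact - approx))                  # ~ 0 up to rounding
</code></pre>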
<p>Last but not least, the remaining term of (3)</p>
<blockquote>
<p>\begin{align*}
\frac{z+2}{(z+1)^2}&=\sum_{j=0}^\infty\frac{(-1+2)j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty (-1)^j\frac{3j+3}{2^{3j+2}}(z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty (-1)^{j+1}\frac{3j+4}{2^{3j+3}}(z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty (-1)^{j}\frac{3j+5}{2^{3j+4}}(z-1)^{3j+2}\tag{8}\\
\end{align*}</p>
<p>Combining (7) and (8), the claim (1) follows.</p>
</blockquote>
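<p>The same kind of spot check works for (8), where the factors $(-2)^{j+2}$ turn into the powers $2^{3j+2},2^{3j+3},2^{3j+4}$ on the residue classes of $j$ modulo $3$ (sketch of ours):</p>

<pre><code>z = 1.5
exact  = (z + 2) / (z + 1)**2
approx = sum((-1)**j * (3*j + 3) / 2**(3*j + 2) * (z - 1)**(3*j)
             + (-1)**(j + 1) * (3*j + 4) / 2**(3*j + 3) * (z - 1)**(3*j + 1)
             + (-1)**j * (3*j + 5) / 2**(3*j + 4) * (z - 1)**(3*j + 2)
             for j in range(60))
print(abs(exact - approx))                  # ~ 0 up to rounding
</code></pre>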
<p><strong>Note:</strong> The singularities nearest to $z=1$ are $\zeta_1$ and $\overline{\zeta_1}$, both having the same distance to $1$. This shows the radius of convergence is
\begin{align*}
\left|1-\zeta_1\right|&=\left|1-\overline{\zeta_1}\right|=\left|1-\frac{1}{2}\left(1-i\sqrt{3}\right)\right|\\
&=\frac{1}{2}\left|1+i\sqrt{3}\right|\\
&=1
\end{align*}</p>
<hr>
<blockquote>
<p><strong>Add-on 2017-01-15:</strong></p>
<p>The OP was asking how to obtain the coefficients $\alpha_j,\beta_j$, $j=0,1,2$, in the partial fraction decomposition (2)</p>
<p>\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}
\end{align*}</p>
</blockquote>
<p>One method is to multiply both sides by the denominator and compare the coefficients $[z^k]$ of $z^k$ for $0\leq k\leq 5$, obtaining six equations for the six unknowns.</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\beta_0\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z+\beta_1\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z+\beta_2\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{9}
\end{align*}</p>
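<p>This brute-force variant is nowadays easy to delegate to a computer algebra system; a sketch assuming <code>sympy</code> (names are ours):</p>

<pre><code>from sympy import symbols, Poly, solve, exp, I, pi, simplify

z, a0, b0, a1, b1, a2, b2 = symbols('z a0 b0 a1 b1 a2 b2')
zeta1, zbar = exp(I*pi/3), exp(-I*pi/3)

rhs = ((a0*z + b0) * (z - zeta1)**2 * (z - zbar)**2
       + (a1*z + b1) * (z + 1)**2 * (z - zbar)**2
       + (a2*z + b2) * (z + 1)**2 * (z - zeta1)**2)

# compare the coefficients of z^0..z^5 on both sides of (9)
eqs = Poly((rhs - z).expand(), z).all_coeffs()
sol = solve(eqs, [a0, b0, a1, b1, a2, b2])
print({k: simplify(v) for k, v in sol.items()})
# -> a0 = -1/9, b0 = -2/9, a1 = conj(zeta1)/9, b1 = -2/9, a2 = zeta1/9, b2 = -2/9
</code></pre>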
<p>But it's laborious to obtain six equations for <em>all</em> coefficients this way. We prefer a two-step approach which somewhat reduces the calculations. Nevertheless it <em>is</em> somewhat tedious.</p>
<blockquote>
<p><strong>Step 1:</strong> Coefficients of highest power two.</p>
<p>We consider
\begin{align*}
\frac{z}{(z^3+1)^2}=
\frac{\alpha_0z+\beta_0}{\left(z+1\right)^2}
+\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}
+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\tag{10}
\end{align*}</p>
<p>We now multiply both sides of (10) with $(z+1)^2$ and evaluate the resulting equation at $z=-1$.
\begin{align*}
\frac{z}{(z^3+1)^2}\cdot(z+1)^2&=\left(\alpha_0z+\beta_0\right)
+\left(\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\right)
\cdot\left(z+1\right)^2
\end{align*}
Note that at the left hand side $-1$ is a <em><a href="https://en.wikipedia.org/wiki/Removable_singularity" rel="nofollow noreferrer">removable singularity</a></em> and evaluating both sides at $z=-1$ gives
\begin{align*}
\color{blue}{-\frac{1}{9}=-\alpha_0+\beta_0}
\end{align*}</p>
<p>Multiplying (10) by $(z-\zeta_1)^2$ and by $(z-\overline{\zeta_1})^2$ and evaluating at $z=\zeta_1$ and $z=\overline{\zeta_1}$ gives two more equations
\begin{align*}
\color{blue}{-\frac{1}{9}}&\color{blue}{=\alpha_1\zeta_1+\beta_1}\\
\color{blue}{-\frac{1}{9}}&\color{blue}{=\alpha_2\overline{\zeta_1}+\beta_2}
\end{align*}</p>
</blockquote>
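<p>Once the coefficients are known (see Step 3 below), these three equations can be cross-checked numerically; a small sketch of ours:</p>

<pre><code>import cmath

zeta1 = cmath.exp(1j * cmath.pi / 3)
zbar  = zeta1.conjugate()
a0, b0 = -1/9, -2/9                         # values derived below
a1, b1 = zbar / 9, -2/9
a2, b2 = zeta1 / 9, -2/9

print(abs(-a0 + b0 + 1/9))                  # -1/9 = -alpha_0 + beta_0
print(abs(a1 * zeta1 + b1 + 1/9))           # -1/9 = alpha_1*zeta_1 + beta_1
print(abs(a2 * zbar + b2 + 1/9))            # -1/9 = alpha_2*conj(zeta_1) + beta_2
</code></pre>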
<p>We put the results of these three equations into (9) and obtain</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\left(\alpha_0-1/9\right)\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z-(\alpha_1\zeta_1+1/9)\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z-(\alpha_2\overline{\zeta_1}+1/9)\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{11}
\end{align*}</p>
<p>Since we have three unknowns $\alpha_0,\alpha_1,\alpha_2$ left, we need three more equations to determine them. We next consider coefficient comparison for $z^0,z^1$ and $z^2$. Observe that each of the three summands in (11) contains five factors. So for $[z^0]$ we derive from each summand $\binom{5}{0}=1$ term, for $[z^1]$ we derive from each summand $\binom{5}{1}=5$ terms and for $[z^2]$ we derive from each summand $\binom{5}{2}=10$ terms.</p>
<p>We will also consistently use the formulas from (5) to simplify expressions.</p>
<blockquote>
<p><strong>Step 2:</strong> Coefficient comparison for $[z^0],[z^1]$ and $[z^2]$.</p>
<p>Coefficient comparison $[z^0]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[\left(\alpha_0-\frac{1}{9}\right)\zeta_1^2\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\zeta_1^2\right]\\
&=\left[\alpha_0-\frac{1}{9}\right]+\left[-\alpha_1\overline{\zeta_1}+\frac{1}{9}\zeta_1\right]
+\left[-\alpha_2\zeta_1+\frac{1}{9}\overline{\zeta_1}\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1}
\end{align*}</p>
<p>Coefficient comparison $[z^1]$:</p>
<p>We obtain from (11)
\begin{align*}
1&=\left[\alpha_0\zeta_1^2\overline{\zeta_1}^2+2\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_1\overline{\zeta_1}^2-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_2\zeta_1^2-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2-\zeta_1\right)\right]\\
&=\left[\alpha_0+2\left(\alpha_0-\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&\qquad+\left[-\alpha_1\zeta_1-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-\alpha_2\overline{\zeta_1}-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&=\left[-\alpha_0+\frac{2}{9}\right]+\left[\alpha_1\zeta_1+\frac{2}{9}\right]
+\left[\alpha_2\overline{\zeta_1}+\frac{2}{9}\right]\\
\color{blue}{\frac{1}{3}}&\color{blue}{=-\alpha_0+\alpha_1\zeta_1+\alpha_2\overline{\zeta_1}}
\end{align*}</p>
<p>Coefficient comparison $[z^2]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[2\alpha_0\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(\overline{\zeta_1}^2+\zeta_1^2+4\zeta_1\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_1\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(\zeta_1^2-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2+1-4\zeta_1\right)\right]\\
&=\left[2\alpha_0\left(-\overline{\zeta_1}-\zeta_1\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}+4\right)\right]\\
&\qquad+\left[2\alpha_1\left(-\zeta_1-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(-\overline{\zeta_1}-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}+1-4\zeta_1\right)\right]\\
&=\left[-2\alpha_0+\left(\alpha_0-\frac{1}{9}\right)3\right]+\left[-2\alpha_1-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-3\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-2\alpha_2-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-3\zeta_1\right)\right]\\
&=\left[\alpha_0-\frac{1}{3}\right]+\left[\alpha_1+\frac{1}{3}\overline{\zeta_1}\right]
+\left[\alpha_2+\frac{1}{3}\zeta_1\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0+\alpha_1+\alpha_2}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. </p>
<blockquote>
<p><strong>Step 3:</strong> Linear equations for $\alpha_0,\alpha_1,\alpha_2$</p>
<p>We obtain via comparison of coefficients the three equations
\begin{align*}
\alpha_0-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1&=0\\
-\alpha_0+\alpha_1\zeta_1+\alpha_2\overline{\zeta_1}&=\frac{1}{3}\\
\alpha_0+\alpha_1+\alpha_2&=0
\end{align*}
with solutions
\begin{align*}
\alpha_0=-\frac{1}{9},\quad\alpha_1=\frac{1}{9}\overline{\zeta_1},\quad\alpha_2=\frac{1}{9}\zeta_1
\end{align*}
We obtain by putting these values in the equations of step 1
\begin{align*}
\beta_0&=\alpha_0-\frac{1}{9}=-\frac{2}{9}\\
\beta_1&=-\alpha_1\zeta_1-\frac{1}{9}=-\frac{1}{9}\overline{\zeta_1}\zeta_1-\frac{1}{9}=-\frac{2}{9}\\
\beta_2&=-\alpha_2\overline{\zeta_1}-\frac{1}{9}=-\frac{1}{9}\zeta_1\overline{\zeta_1}-\frac{1}{9}=-\frac{2}{9}
\end{align*}</p>
</blockquote>
<p>and we finally conclude
\begin{align*}
\frac{z}{(1+z^3)^2}&=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\\
&=\frac{1}{9}\left(
-\frac{z+2}{(z+1)^2}
+\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)
\end{align*}</p>
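<p>The $3\times 3$ system of Step 3 can also be solved numerically as a consistency check; a sketch of ours assuming <code>numpy</code>:</p>

<pre><code>import numpy as np

zeta1 = np.exp(1j * np.pi / 3)
zbar  = np.conj(zeta1)

# the three linear equations of Step 3 in matrix form
A = np.array([[ 1, -zbar, -zeta1],
              [-1, zeta1,  zbar ],
              [ 1,  1,     1    ]])
b = np.array([0, 1/3, 0])
print(np.linalg.solve(A, b))
# -> [-1/9, conj(zeta1)/9, zeta1/9], i.e. approx [-0.111, 0.056-0.096j, 0.056+0.096j]
</code></pre>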
Answer 2: <p>Notice that</p>
<p>$$\frac z{(1+z^3)^2}=-z\frac{\partial}{\partial z^3}\frac1{1+z^3}=-z\frac\partial{\partial z^3}\sum_{n=0}^\infty(-1)^n(z^3)^n=\sum_{n=0}^\infty n(-1)^{n+1}z^{3n-2}$$</p>
<p>This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in <a href="https://math.stackexchange.com/questions/2048360/given-some-power-series-how-do-i-make-one-centered-at-a-different-point">this answer</a> to get the expansion at $z=1$.</p>
<p>$$=\sum_{j=0}^\infty \left(\sum_{k=j}^\infty \binom kj a_k \right)(z-1)^j$$</p>
<p>where $a_{3n+1}=(-1)^n(n+1)$ for $n\in\mathbb N$, else $a_k=0$.</p>
<p>However, note that this method is usually only viable when the new center lies within the radius of convergence of the original series. For example, plugging $z=1$ into our "expansion" yields</p>
<p>$$\frac14\stackrel?=1-2+3-4+5-6+\dots$$</p>
<p>Interestingly, however, we may regularize our divergent series so that it equals the original function by applying an <a href="https://en.wikipedia.org/wiki/Euler_summation" rel="nofollow noreferrer">Euler sum</a> to it:</p>
<p>$$f(z)=\sum_{j=0}^\infty\sum_{k=\lfloor j/3\rfloor}^\infty \binom{3k+1}j (-1)^k(k+1)(z-1)^j\\=\sum_{j=0}^\infty\sum_{i=\lfloor j/3\rfloor}^\infty\frac1{2^i}\sum_{k=\lfloor j/3\rfloor}^i\binom ik\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$</p>
<p>which now converges as desired.</p>
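<p>As a small illustration of the regularization (ours, not from the original answer), the classical Euler transform $\sum_{n\ge0}(-1)^na_n=\sum_{k\ge0}(-1)^k\Delta^ka_0/2^{k+1}$ recovers the quoted value $1-2+3-4+\cdots=\frac14$:</p>

<pre><code>def euler_sum(a):
    """Euler transform of sum (-1)^n a_n via forward differences of a."""
    total, row = 0.0, list(a)
    for k in range(len(a) // 2):
        total += (-1)**k * row[0] / 2**(k + 1)
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return total

print(euler_sum([n + 1 for n in range(40)]))   # -> 0.25
</code></pre>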
Answer 3: <blockquote>
<p>The following expansion of $f(z)=\frac{z}{(1+z^3)^2}$ at $z=1$ is valid</p>
<p>\begin{align*}
f(z)&=\frac{z}{(1+z^3)^2}\\
&=\frac{1}{9}\sum_{j=0}^\infty\left(3j+3-(-1)^j\frac{3j+3}{2^{3j+2}}\right)(z-1)^{3j}\\
&\qquad-\frac{1}{9}\sum_{j=0}^\infty\left(6j+5-(-1)^j\frac{3j+4}{2^{3j+3}}\right)(z-1)^{3j+1}\\
&\qquad+\frac{1}{9}\sum_{j=0}^\infty\left(3j+2-(-1)^j\frac{3j+5}{2^{3j+4}}\right)(z-1)^{3j+2}\qquad\qquad |z-1|<1\tag{1}\\
&=\frac{1}{4}-\frac{1}{2}(z-1)+\frac{3}{16}(z-1)^2+\frac{11}{16}(z-1)^3\\
&\qquad-\frac{79}{64}(z-1)^4+\frac{9}{16}(z-1)^5+\frac{255}{256}(z-1)^6+\cdots
\end{align*}</p>
</blockquote>
<p>We perform a <em><a href="https://en.wikipedia.org/wiki/Partial_fraction_decomposition" rel="nofollow noreferrer">partial fraction decomposition</a></em> of $f(z)$. The zeros of the denominator $(1+z^3)^2$ are
\begin{align*}
\zeta_0&=-1\\
\zeta_1&=e^\frac{\pi i}{3}=\frac{1}{2}\left(1+i\sqrt{3}\right)\\
\zeta_2&=e^{-\frac{\pi i}{3}}=\frac{1}{2}\left(1-i\sqrt{3}\right)=\overline{\zeta_1}
\end{align*}
Since each of the zeros has muliplicity two we use the Ansatz:
\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\tag{2}
\end{align*}</p>
<blockquote>
<p>Multiplication of the RHS of (2) and comparison of coefficients gives</p>
<p>\begin{align*}
\frac{z}{(z^3+1)^2}=\frac{1}{9}\left(-\frac{z+2}{(z+1)^2}+
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)\tag{3}
\end{align*}</p>
<p>Note, the second summand of the RHS of (3) is the complex conjugate of the third summand. This will help to simplify some calculations.</p>
</blockquote>
<p>Next we want to expand the terms of the RHS of (3) at $z=1$. In order to do so it's convenient to do some preparatory work. We consider the general aspect of expanding a term around $z=z_0$.</p>
<p>We obtain
\begin{align*}
\frac{1}{(z-z_0)^n}&=\frac{1}{\left((z_1-z_0)+(z-z_1)\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\cdot\frac{1}{\left(1+\frac{z-z_1}{z_1-z_0}\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{-n}{j}\left(\frac{z-z_0}{z_1-z_0}\right)^j\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{n+j-1}{j}\left(\frac{z-z_0}{z_0-z_1}\right)^j\tag{4}\\
\end{align*}</p>
<p>Since we have to do a lot of calculations with the zeros
$$\left\{-1,\frac{1}{2}\left(1+i\sqrt{3}\right),\frac{1}{2}\left(1-i\sqrt{3}\right)\right\}=\left\{-1,\zeta_1,\overline{\zeta_1}\right\}$$
its convenient to prepare some intermediate results
\begin{array}{r|rrr|lll}
\cdot&-1&\zeta_1&\overline{\zeta_1}\\
\hline
-1&1&-\zeta_1&-\overline{\zeta_1}&\qquad \frac{1}{\zeta_0-1}=-\frac{1}{2}\\
\zeta_1&-\zeta_1&-\overline{\zeta_1}&1
&\qquad \frac{1}{\zeta_1-1}=-\zeta_1&\frac{1}{\left(\zeta_1-1\right)^2}=-\overline{\zeta_1}
&\frac{1}{\left(\zeta_1-1\right)^3}=1\\
\overline{\zeta_1}&-\overline{\zeta_1}&1&-\zeta_1
&\qquad \frac{1}{\overline{\zeta_1}-1}=-\overline{\zeta_1}&\frac{1}{\left(\overline{\zeta_1}-1\right)^2}=-\zeta_1
&\frac{1}{\left(\overline{\zeta_1}-1\right)^3}=1\tag{5}\\
\end{array}</p>
<p>We are now ready and well prepared to continue.</p>
<blockquote>
<p>Evaluation of (4) with $n=2$ and at $z_1=1$ gives
\begin{align*}
\frac{1}{\left(z-z_0\right)^2}=\frac{1}{\left(z_0-1\right)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j
\end{align*}
Since we use in (2) a numerator $\alpha z+\beta$ we respect it also in the expansion
\begin{align*}
\frac{\alpha z+\beta}{\left(z-z_0\right)^2}
&=\frac{\alpha(z-1)+\alpha+\beta}{(z_0-1)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{(\alpha z_0+\beta)j+\alpha+\beta}{(z_0-1)^{j+2}}(z-1)^j\tag{6}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. We take the three terms of (2) with $-1,\zeta_1,\overline{\zeta_1}$ and evaluate them using (6). We also note that according to (5) $\frac{1}{\left(\zeta_1-1\right)^3}=1$ so that we can split the sum according to the index $j$ modulo $3$.</p>
<blockquote>
<p>We obtain using the relations in (5)
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
&=\sum_{j=0}^\infty\frac{\left(\overline{\zeta_1}\zeta_1-1\right)j+\overline{\zeta_1}-2}{\left(\zeta_1-1\right)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\left(-\overline{\zeta_1}\right)\left(-j+\overline{\zeta_1}-2\right)}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\overline{\zeta_1}(j+2)+\zeta_1}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)\overline{\zeta_1}+\zeta_1\right]\left(-\zeta_1\right)(z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}+\zeta_1\right]\left(-\overline{\zeta_1}\right)(z-1)^{3j+2}\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)(-1)+\overline{\zeta_1}\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}-1\right](z-1)^{3j+2}\\
\end{align*}
We can now earn that first and second term are complex conjugates and obtain
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
&=\sum_{j=0}^\infty\left[3j+3\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[-6j-5\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[3j+2\right](z-1)^{3j+2}\tag{7}\\
\end{align*}</p>
</blockquote>
<p>Last but not least the final term of (2)</p>
<blockquote>
<p>\begin{align*}
\frac{z+2}{(z+1)^2}&=\sum_{j=0}^\infty\frac{(-1+2)j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty (-1)^j\frac{3j+3}{2^{3j}}(z-1)^{3j}\\
&\qquad\sum_{j=0}^\infty (-1)^{j+1}\frac{3j+4}{2^{3j+1}}(z-1)^{3j+1}\\
&\qquad\sum_{j=0}^\infty (-1)^{j}\frac{3j+5}{2^{3j+1}}(z-1)^{3j+2}\tag{8}\\
\end{align*}</p>
<p>Combining (7) and (8) and the claim (1) follows.</p>
</blockquote>
<p><strong>Note:</strong> The nearest singularities of $z=1$ are $\zeta_1$ and $\overline{\zeta_1}$, both having the same distance to $1$. This shows the radius of convergence is
\begin{align*}
\left|1-\zeta_1\right|&=\left|1-\overline{\zeta_1}\right|=\left|1-\frac{1}{2}\left(1-i\sqrt{3}\right)\right|\\
&=\frac{1}{2}\left|1+i\sqrt{3}\right|\\
&=1
\end{align*}</p>
<hr>
<blockquote>
<p><strong>Add-on 2017-01-15:</strong></p>
<p>OPs was asking how to obtain the coefficients $\alpha_j,\beta_j$, $j=0,1,2$ in the partial fraction decomposition (2)</p>
<p>\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}
\end{align*}</p>
</blockquote>
<p>One method is to multiply both sides with the denominator and perform a coefficient comparison for $[z^k]$, the coefficient of $z^k$ with $0\leq k\leq 5$ and obtain this way six equations for the six unknowns.</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\beta_0\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z+\beta_1\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z+\beta_2\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{9}
\end{align*}</p>
<p>But its laborious to obtain six equations for <em>all</em> coefficients this way. We prefer a two step approach which reduces somewhat the calculations. Nevertheless it <em>is</em> somewhat tedious.</p>
<blockquote>
<p><strong>Step 1:</strong> Coefficients of highest power two.</p>
<p>We consider
\begin{align*}
\frac{z}{(z^3+1)^2}=
\frac{\alpha_0z+\beta_0}{\left(z+1\right)^2}
+\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}
+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\tag{10}
\end{align*}</p>
<p>We now multiply both sides of (10) with $(z+1)^2$ and evaluate the resulting equation at $z=-1$.
\begin{align*}
\frac{z}{(z^3+1)^2}\cdot(z+1)^2&=\left(\alpha_0z+\beta_0\right)
+\left(\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\right)
\cdot\left(z+1\right)^2
\end{align*}
Note that at the left hand side $-1$ is a <em><a href="https://en.wikipedia.org/wiki/Removable_singularity" rel="nofollow noreferrer">removable singularity</a></em> and evaluating both sides at $z=-1$ gives
\begin{align*}
\color{blue}{-\frac{1}{9}=-\alpha_0+\beta_0}
\end{align*}</p>
<p>Multiplication of (9) with $(z-\zeta_1)^2$ and with $(z-\overline{\zeta_1})^2$ and evaluating at $z=\zeta_1$ and $z=\overline{\zeta_1}$ gives two more equations
\begin{align*}
\color{blue}{-\frac{1}{9}}&\color{blue}{=-\alpha_1\zeta_1+\beta_1}\\
\color{blue}{-\frac{1}{9}}&\color{blue}{=-\alpha_2\overline{\zeta_1}+\beta_2}
\end{align*}</p>
</blockquote>
<p>We put the results of these three equations into (9) and obtain</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\left(\alpha_0-1/9\right)\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z-(\alpha_1\zeta_1+1/9)\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z-(\alpha_1\overline{\zeta_1}+1/9)\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{11}
\end{align*}</p>
<p>Since we have three unknowns $\alpha_0,\alpha_1,\alpha_2$ left, we need three more equations for determination. We next consider coefficient comparison for $z^0,z^1$ and $z^2$. Observe that each of the three summands in (11) contain five factors. So for $[z^0]$ we derive from each summand $\binom{5}{0}=1$ terms, for $[z^1]$ we derive $\binom{5}{1}=5$ terms and for $[z^2]$ we derive from each summand $\binom{5}{2}=10$ terms.</p>
<p>We will also consequently use the formulas from (5) to simplify expressions.</p>
<blockquote>
<p><strong>Step 2:</strong> Coefficient comparison for $[z^0],[z^1]$ and $[z^2]$.</p>
<p>Coefficient comparison $[z^0]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[\left(\alpha_0-\frac{1}{9}\right)\zeta_1^2\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_1+\frac{1}{9}\right)\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_2+\frac{1}{9}\right)\zeta_1^2\right]\\
&=\left[\alpha_0-\frac{1}{9}\right]+\left[-\alpha_1\overline{\zeta_1}+\frac{1}{9}\zeta_1\right]
+\left[-\alpha_2\zeta_1+\frac{1}{9}\overline{\zeta_1}\right]\\
\color{blue}{0}&\color{blue}{=\alpha_1-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1}
\end{align*}</p>
<p>Coefficient comparison $[z^1]$:</p>
<p>We obtain from (11)
\begin{align*}
1&=\left[\alpha_0\zeta_1^2\overline{\zeta_1}^2+2\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_1\overline{\zeta_1}^2-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_2\zeta_1^2-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2-\zeta_1\right)\right]\\
&=\left[\alpha_0+2\left(\alpha_0-\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&\qquad+\left[-\alpha_1\zeta_1-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-\alpha_2\overline{\zeta_1}-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&=\left[-\alpha_0+\frac{1}{3}\right]+\left[\alpha_1\zeta_1+\frac{1}{3}\right]
+\left[\alpha_2\overline{\zeta_1}+\frac{1}{3}\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}}
\end{align*}</p>
<p>Coefficient comparison $[z^2]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[2\alpha_0\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(\overline{\zeta_1}^2+\zeta_1^2+4\zeta_1\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_1\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(\zeta_1^2-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2+1-4\zeta_1\right)\right]\\
&=\left[2\alpha_0\left(-\overline{\zeta_1}-\zeta_1\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}+4\right)\right]\\
&\qquad+\left[2\alpha_1\left(-\zeta_1-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(-\overline{\zeta_1}-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}+1-4\zeta_1\right)\right]\\
&=\left[-2\alpha_0+\left(\alpha_0-\frac{1}{9}\right)3\right]+\left[-2\alpha_1-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-3\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-2\alpha_2-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-3\zeta_1\right)\right]\\
&=\left[\alpha_0-\frac{1}{3}\right]+\left[\alpha_1+\frac{1}{3}\overline{\zeta_1}\right]
+\left[\alpha_2+\frac{1}{3}\zeta_1\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0+\alpha_1+\alpha_2}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. </p>
<blockquote>
<p><strong>Step 3:</strong> Linear equations for $\alpha_0,\alpha_1,\alpha_2$</p>
<p>We obtain via comparison of coefficients the three equations
\begin{align*}
\alpha_1-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1&=0\\
\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}&=0\\
\alpha_0+\alpha_1+\alpha_2&=0
\end{align*}
with solutions
\begin{align*}
\alpha_0=-\frac{1}{9},\quad\alpha_1=\frac{1}{9}\overline{\zeta_1},\quad\alpha_2=\frac{1}{9}\zeta_1
\end{align*}
We obtain by putting these values in the equations of step 1
\begin{align*}
\beta_0&=\alpha_0-\frac{1}{9}=-\frac{2}{9}\\
\beta_1&=\alpha_1\zeta_1-\frac{1}{9}=\frac{1}{9}\overline{\zeta_1}\zeta_1=\frac{1}{9}\\
\beta_2&=\alpha_2\overline{\zeta_1}-\frac{1}{9}=\frac{1}{9}\zeta_1\overline{\zeta_1}=\frac{1}{9}
\end{align*}</p>
</blockquote>
<p>and we finally conclude
\begin{align*}
\frac{z}{(1+z^3)^2}&=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\\
&=\frac{1}{9}\left(
-\frac{z+2}{(z+1)^2}
+\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)
\end{align*}</p>
Answer 4: <p>Notice that</p>
<p>$$\frac z{(1+z^3)^2}=-z\frac{\partial}{\partial z^3}\frac1{1+z^3}=-z\frac\partial{\partial z^3}\sum_{n=0}^\infty(-1)^n(z^3)^n=\sum_{n=0}^\infty n(-1)^{n+1}z^{3n-2}$$</p>
<p>This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in <a href="https://math.stackexchange.com/questions/2048360/given-some-power-series-how-do-i-make-one-centered-at-a-different-point">this answer</a> to get the expansion at $z=1$.</p>
<p>$$=\sum_{j=0}^\infty \left(\sum_{k=j}^\infty \binom kj a_k \right)(z-1)^j$$</p>
<p>where $a_{3n+1}=(-1)^n(n+1)$ for $n\in\mathbb N$, else $a_k=0$.</p>
<p>However, note that this method is usually only viable when the center for the radius of convergence was within the radius of convergence of the original series. For example, plugging $z=1$ into our "expansion" yields</p>
<p>$$\frac14\stackrel?=1-2+3-4+5-6+\dots$$</p>
<p>However, an interesting case happens that we may regularize our divergent series to equal our original function by applying an <a href="https://en.wikipedia.org/wiki/Euler_summation" rel="nofollow noreferrer">Euler sum</a> to it:</p>
<p>$$f(z)=\sum_{j=0}^\infty\sum_{k=\lfloor j/3\rfloor}^\infty \binom{3k+1}j (-1)^k(k+1)(z-1)^j\\=\sum_{j=0}^\infty\sum_{i=\lfloor j/3\rfloor}^\infty\frac1{2^i}\sum_{k=\lfloor j/3\rfloor}^i\binom ik\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$</p>
<p>which now converges as desired.</p>
Answer 5: <blockquote>
<p>The following expansion of $f(z)=\frac{z}{(1+z^3)^2}$ at $z=1$ is valid</p>
<p>\begin{align*}
f(z)&=\frac{z}{(1+z^3)^2}\\
&=\frac{1}{9}\sum_{j=0}^\infty\left(3j+3-(-1)^j\frac{3j+3}{2^{3j+2}}\right)(z-1)^{3j}\\
&\qquad-\frac{1}{9}\sum_{j=0}^\infty\left(6j+5-(-1)^j\frac{3j+4}{2^{3j+3}}\right)(z-1)^{3j+1}\\
&\qquad+\frac{1}{9}\sum_{j=0}^\infty\left(3j+2-(-1)^j\frac{3j+5}{2^{3j+4}}\right)(z-1)^{3j+2}\qquad\qquad |z-1|<1\tag{1}\\
&=\frac{1}{4}-\frac{1}{2}(z-1)+\frac{3}{16}(z-1)^2+\frac{11}{16}(z-1)^3\\
&\qquad-\frac{79}{64}(z-1)^4+\frac{9}{16}(z-1)^5+\frac{255}{256}(z-1)^6+\cdots
\end{align*}</p>
</blockquote>
<p>We perform a <em><a href="https://en.wikipedia.org/wiki/Partial_fraction_decomposition" rel="nofollow noreferrer">partial fraction decomposition</a></em> of $f(z)$. The zeros of the denominator $(1+z^3)^2$ are
\begin{align*}
\zeta_0&=-1\\
\zeta_1&=e^\frac{\pi i}{3}=\frac{1}{2}\left(1+i\sqrt{3}\right)\\
\zeta_2&=e^{-\frac{\pi i}{3}}=\frac{1}{2}\left(1-i\sqrt{3}\right)=\overline{\zeta_1}
\end{align*}
Since each of the zeros has muliplicity two we use the Ansatz:
\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\tag{2}
\end{align*}</p>
<blockquote>
<p>Multiplication of the RHS of (2) and comparison of coefficients gives</p>
<p>\begin{align*}
\frac{z}{(z^3+1)^2}=\frac{1}{9}\left(-\frac{z+2}{(z+1)^2}+
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)\tag{3}
\end{align*}</p>
<p>Note, the second summand of the RHS of (3) is the complex conjugate of the third summand. This will help to simplify some calculations.</p>
</blockquote>
<p>Next we want to expand the terms of the RHS of (3) at $z=1$. In order to do so it's convenient to do some preparatory work. We consider the general aspect of expanding a term around $z=z_0$.</p>
<p>We obtain
\begin{align*}
\frac{1}{(z-z_0)^n}&=\frac{1}{\left((z_1-z_0)+(z-z_1)\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\cdot\frac{1}{\left(1+\frac{z-z_1}{z_1-z_0}\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{-n}{j}\left(\frac{z-z_0}{z_1-z_0}\right)^j\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{n+j-1}{j}\left(\frac{z-z_0}{z_0-z_1}\right)^j\tag{4}\\
\end{align*}</p>
<p>Since we have to do a lot of calculations with the zeros
$$\left\{-1,\frac{1}{2}\left(1+i\sqrt{3}\right),\frac{1}{2}\left(1-i\sqrt{3}\right)\right\}=\left\{-1,\zeta_1,\overline{\zeta_1}\right\}$$
its convenient to prepare some intermediate results
\begin{array}{r|rrr|lll}
\cdot&-1&\zeta_1&\overline{\zeta_1}\\
\hline
-1&1&-\zeta_1&-\overline{\zeta_1}&\qquad \frac{1}{\zeta_0-1}=-\frac{1}{2}\\
\zeta_1&-\zeta_1&-\overline{\zeta_1}&1
&\qquad \frac{1}{\zeta_1-1}=-\zeta_1&\frac{1}{\left(\zeta_1-1\right)^2}=-\overline{\zeta_1}
&\frac{1}{\left(\zeta_1-1\right)^3}=1\\
\overline{\zeta_1}&-\overline{\zeta_1}&1&-\zeta_1
&\qquad \frac{1}{\overline{\zeta_1}-1}=-\overline{\zeta_1}&\frac{1}{\left(\overline{\zeta_1}-1\right)^2}=-\zeta_1
&\frac{1}{\left(\overline{\zeta_1}-1\right)^3}=1\tag{5}\\
\end{array}</p>
<p>We are now ready and well prepared to continue.</p>
<blockquote>
<p>Evaluation of (4) with $n=2$ and at $z_1=1$ gives
\begin{align*}
\frac{1}{\left(z-z_0\right)^2}=\frac{1}{\left(z_0-1\right)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j
\end{align*}
Since we use in (2) a numerator $\alpha z+\beta$ we respect it also in the expansion
\begin{align*}
\frac{\alpha z+\beta}{\left(z-z_0\right)^2}
&=\frac{\alpha(z-1)+\alpha+\beta}{(z_0-1)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{(\alpha z_0+\beta)j+\alpha+\beta}{(z_0-1)^{j+2}}(z-1)^j\tag{6}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. We take the three terms of (2) with $-1,\zeta_1,\overline{\zeta_1}$ and evaluate them using (6). We also note that according to (5) $\frac{1}{\left(\zeta_1-1\right)^3}=1$ so that we can split the sum according to the index $j$ modulo $3$.</p>
<blockquote>
<p>We obtain using the relations in (5)
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
&=\sum_{j=0}^\infty\frac{\left(\overline{\zeta_1}\zeta_1-1\right)j+\overline{\zeta_1}-2}{\left(\zeta_1-1\right)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\left(-\overline{\zeta_1}\right)\left(-j+\overline{\zeta_1}-2\right)}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\overline{\zeta_1}(j+2)+\zeta_1}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)\overline{\zeta_1}+\zeta_1\right]\left(-\zeta_1\right)(z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}+\zeta_1\right]\left(-\overline{\zeta_1}\right)(z-1)^{3j+2}\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)(-1)+\overline{\zeta_1}\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}-1\right](z-1)^{3j+2}\\
\end{align*}
We can now earn that first and second term are complex conjugates and obtain
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
&=\sum_{j=0}^\infty\left[3j+3\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[-6j-5\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[3j+2\right](z-1)^{3j+2}\tag{7}\\
\end{align*}</p>
</blockquote>
<p>Last but not least the final term of (2)</p>
<blockquote>
<p>\begin{align*}
\frac{z+2}{(z+1)^2}&=\sum_{j=0}^\infty\frac{(-1+2)j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty (-1)^j\frac{3j+3}{2^{3j}}(z-1)^{3j}\\
&\qquad\sum_{j=0}^\infty (-1)^{j+1}\frac{3j+4}{2^{3j+1}}(z-1)^{3j+1}\\
&\qquad\sum_{j=0}^\infty (-1)^{j}\frac{3j+5}{2^{3j+1}}(z-1)^{3j+2}\tag{8}\\
\end{align*}</p>
<p>Combining (7) and (8) and the claim (1) follows.</p>
</blockquote>
<p><strong>Note:</strong> The nearest singularities of $z=1$ are $\zeta_1$ and $\overline{\zeta_1}$, both having the same distance to $1$. This shows the radius of convergence is
\begin{align*}
\left|1-\zeta_1\right|&=\left|1-\overline{\zeta_1}\right|=\left|1-\frac{1}{2}\left(1-i\sqrt{3}\right)\right|\\
&=\frac{1}{2}\left|1+i\sqrt{3}\right|\\
&=1
\end{align*}</p>
<hr>
<blockquote>
<p><strong>Add-on 2017-01-15:</strong></p>
<p>OPs was asking how to obtain the coefficients $\alpha_j,\beta_j$, $j=0,1,2$ in the partial fraction decomposition (2)</p>
<p>\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}
\end{align*}</p>
</blockquote>
<p>One method is to multiply both sides with the denominator and perform a coefficient comparison for $[z^k]$, the coefficient of $z^k$ with $0\leq k\leq 5$ and obtain this way six equations for the six unknowns.</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\beta_0\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z+\beta_1\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z+\beta_2\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{9}
\end{align*}</p>
<p>But its laborious to obtain six equations for <em>all</em> coefficients this way. We prefer a two step approach which reduces somewhat the calculations. Nevertheless it <em>is</em> somewhat tedious.</p>
<blockquote>
<p><strong>Step 1:</strong> Coefficients of highest power two.</p>
<p>We consider
\begin{align*}
\frac{z}{(z^3+1)^2}=
\frac{\alpha_0z+\beta_0}{\left(z+1\right)^2}
+\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}
+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\tag{10}
\end{align*}</p>
<p>We now multiply both sides of (10) with $(z+1)^2$ and evaluate the resulting equation at $z=-1$.
\begin{align*}
\frac{z}{(z^3+1)^2}\cdot(z+1)^2&=\left(\alpha_0z+\beta_0\right)
+\left(\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\right)
\cdot\left(z+1\right)^2
\end{align*}
Note that at the left hand side $-1$ is a <em><a href="https://en.wikipedia.org/wiki/Removable_singularity" rel="nofollow noreferrer">removable singularity</a></em> and evaluating both sides at $z=-1$ gives
\begin{align*}
\color{blue}{-\frac{1}{9}=-\alpha_0+\beta_0}
\end{align*}</p>
<p>Multiplication of (9) with $(z-\zeta_1)^2$ and with $(z-\overline{\zeta_1})^2$ and evaluating at $z=\zeta_1$ and $z=\overline{\zeta_1}$ gives two more equations
\begin{align*}
\color{blue}{-\frac{1}{9}}&\color{blue}{=-\alpha_1\zeta_1+\beta_1}\\
\color{blue}{-\frac{1}{9}}&\color{blue}{=-\alpha_2\overline{\zeta_1}+\beta_2}
\end{align*}</p>
</blockquote>
<p>We put the results of these three equations into (9) and obtain</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\left(\alpha_0-1/9\right)\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z-(\alpha_1\zeta_1+1/9)\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z-(\alpha_1\overline{\zeta_1}+1/9)\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{11}
\end{align*}</p>
<p>Since we have three unknowns $\alpha_0,\alpha_1,\alpha_2$ left, we need three more equations for determination. We next consider coefficient comparison for $z^0,z^1$ and $z^2$. Observe that each of the three summands in (11) contain five factors. So for $[z^0]$ we derive from each summand $\binom{5}{0}=1$ terms, for $[z^1]$ we derive $\binom{5}{1}=5$ terms and for $[z^2]$ we derive from each summand $\binom{5}{2}=10$ terms.</p>
<p>We will also consequently use the formulas from (5) to simplify expressions.</p>
<blockquote>
<p><strong>Step 2:</strong> Coefficient comparison for $[z^0],[z^1]$ and $[z^2]$.</p>
<p>Coefficient comparison $[z^0]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[\left(\alpha_0-\frac{1}{9}\right)\zeta_1^2\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_1+\frac{1}{9}\right)\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_2+\frac{1}{9}\right)\zeta_1^2\right]\\
&=\left[\alpha_0-\frac{1}{9}\right]+\left[-\alpha_1\overline{\zeta_1}+\frac{1}{9}\zeta_1\right]
+\left[-\alpha_2\zeta_1+\frac{1}{9}\overline{\zeta_1}\right]\\
\color{blue}{0}&\color{blue}{=\alpha_1-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1}
\end{align*}</p>
<p>Coefficient comparison $[z^1]$:</p>
<p>We obtain from (11)
\begin{align*}
1&=\left[\alpha_0\zeta_1^2\overline{\zeta_1}^2+2\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_1\overline{\zeta_1}^2-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_2\zeta_1^2-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2-\zeta_1\right)\right]\\
&=\left[\alpha_0+2\left(\alpha_0-\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&\qquad+\left[-\alpha_1\zeta_1-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-\alpha_2\overline{\zeta_1}-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&=\left[-\alpha_0+\frac{1}{3}\right]+\left[\alpha_1\zeta_1+\frac{1}{3}\right]
+\left[\alpha_2\overline{\zeta_1}+\frac{1}{3}\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}}
\end{align*}</p>
<p>Coefficient comparison $[z^2]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[2\alpha_0\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(\overline{\zeta_1}^2+\zeta_1^2+4\zeta_1\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_1\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(\zeta_1^2-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2+1-4\zeta_1\right)\right]\\
&=\left[2\alpha_0\left(-\overline{\zeta_1}-\zeta_1\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}+4\right)\right]\\
&\qquad+\left[2\alpha_1\left(-\zeta_1-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(-\overline{\zeta_1}-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}+1-4\zeta_1\right)\right]\\
&=\left[-2\alpha_0+\left(\alpha_0-\frac{1}{9}\right)3\right]+\left[-2\alpha_1-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-3\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-2\alpha_2-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-3\zeta_1\right)\right]\\
&=\left[\alpha_0-\frac{1}{3}\right]+\left[\alpha_1+\frac{1}{3}\overline{\zeta_1}\right]
+\left[\alpha_2+\frac{1}{3}\zeta_1\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0+\alpha_1+\alpha_2}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. </p>
<blockquote>
<p><strong>Step 3:</strong> Linear equations for $\alpha_0,\alpha_1,\alpha_2$</p>
<p>We obtain via comparison of coefficients the three equations
\begin{align*}
\alpha_1-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1&=0\\
\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}&=0\\
\alpha_0+\alpha_1+\alpha_2&=0
\end{align*}
with solutions
\begin{align*}
\alpha_0=-\frac{1}{9},\quad\alpha_1=\frac{1}{9}\overline{\zeta_1},\quad\alpha_2=\frac{1}{9}\zeta_1
\end{align*}
We obtain by putting these values in the equations of step 1
\begin{align*}
\beta_0&=\alpha_0-\frac{1}{9}=-\frac{2}{9}\\
\beta_1&=\alpha_1\zeta_1-\frac{1}{9}=\frac{1}{9}\overline{\zeta_1}\zeta_1=\frac{1}{9}\\
\beta_2&=\alpha_2\overline{\zeta_1}-\frac{1}{9}=\frac{1}{9}\zeta_1\overline{\zeta_1}=\frac{1}{9}
\end{align*}</p>
</blockquote>
<p>and we finally conclude
\begin{align*}
\frac{z}{(1+z^3)^2}&=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\\
&=\frac{1}{9}\left(
-\frac{z+2}{(z+1)^2}
+\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)
\end{align*}</p>
Answer 6: <p>Notice that</p>
<p>$$\frac z{(1+z^3)^2}=-z\frac{\partial}{\partial z^3}\frac1{1+z^3}=-z\frac\partial{\partial z^3}\sum_{n=0}^\infty(-1)^n(z^3)^n=\sum_{n=0}^\infty n(-1)^{n+1}z^{3n-2}$$</p>
<p>This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in <a href="https://math.stackexchange.com/questions/2048360/given-some-power-series-how-do-i-make-one-centered-at-a-different-point">this answer</a> to get the expansion at $z=1$.</p>
<p>$$=\sum_{j=0}^\infty \left(\sum_{k=j}^\infty \binom kj a_k \right)(z-1)^j$$</p>
<p>where $a_{3n+1}=(-1)^n(n+1)$ for $n\in\mathbb N$, else $a_k=0$.</p>
<p>However, note that this method is usually only viable when the center for the radius of convergence was within the radius of convergence of the original series. For example, plugging $z=1$ into our "expansion" yields</p>
<p>$$\frac14\stackrel?=1-2+3-4+5-6+\dots$$</p>
<p>However, an interesting case happens that we may regularize our divergent series to equal our original function by applying an <a href="https://en.wikipedia.org/wiki/Euler_summation" rel="nofollow noreferrer">Euler sum</a> to it:</p>
<p>$$f(z)=\sum_{j=0}^\infty\sum_{k=\lfloor j/3\rfloor}^\infty \binom{3k+1}j (-1)^k(k+1)(z-1)^j\\=\sum_{j=0}^\infty\sum_{i=\lfloor j/3\rfloor}^\infty\frac1{2^i}\sum_{k=\lfloor j/3\rfloor}^i\binom ik\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$</p>
<p>which now converges as desired.</p>
Answer 7: <blockquote>
<p>The following expansion of $f(z)=\frac{z}{(1+z^3)^2}$ at $z=1$ is valid</p>
<p>\begin{align*}
f(z)&=\frac{z}{(1+z^3)^2}\\
&=\frac{1}{9}\sum_{j=0}^\infty\left(3j+3-(-1)^j\frac{3j+3}{2^{3j+2}}\right)(z-1)^{3j}\\
&\qquad-\frac{1}{9}\sum_{j=0}^\infty\left(6j+5-(-1)^j\frac{3j+4}{2^{3j+3}}\right)(z-1)^{3j+1}\\
&\qquad+\frac{1}{9}\sum_{j=0}^\infty\left(3j+2-(-1)^j\frac{3j+5}{2^{3j+4}}\right)(z-1)^{3j+2}\qquad\qquad |z-1|<1\tag{1}\\
&=\frac{1}{4}-\frac{1}{2}(z-1)+\frac{3}{16}(z-1)^2+\frac{11}{16}(z-1)^3\\
&\qquad-\frac{79}{64}(z-1)^4+\frac{9}{16}(z-1)^5+\frac{255}{256}(z-1)^6+\cdots
\end{align*}</p>
</blockquote>
<p>We perform a <em><a href="https://en.wikipedia.org/wiki/Partial_fraction_decomposition" rel="nofollow noreferrer">partial fraction decomposition</a></em> of $f(z)$. The zeros of the denominator $(1+z^3)^2$ are
\begin{align*}
\zeta_0&=-1\\
\zeta_1&=e^\frac{\pi i}{3}=\frac{1}{2}\left(1+i\sqrt{3}\right)\\
\zeta_2&=e^{-\frac{\pi i}{3}}=\frac{1}{2}\left(1-i\sqrt{3}\right)=\overline{\zeta_1}
\end{align*}
Since each of the zeros has muliplicity two we use the Ansatz:
\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\tag{2}
\end{align*}</p>
<blockquote>
<p>Multiplication of the RHS of (2) and comparison of coefficients gives</p>
<p>\begin{align*}
\frac{z}{(z^3+1)^2}=\frac{1}{9}\left(-\frac{z+2}{(z+1)^2}+
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)\tag{3}
\end{align*}</p>
<p>Note, the second summand of the RHS of (3) is the complex conjugate of the third summand. This will help to simplify some calculations.</p>
</blockquote>
<p>Next we want to expand the terms of the RHS of (3) at $z=1$. In order to do so it's convenient to do some preparatory work. We consider the general aspect of expanding a term around $z=z_0$.</p>
<p>We obtain
\begin{align*}
\frac{1}{(z-z_0)^n}&=\frac{1}{\left((z_1-z_0)+(z-z_1)\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\cdot\frac{1}{\left(1+\frac{z-z_1}{z_1-z_0}\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{-n}{j}\left(\frac{z-z_0}{z_1-z_0}\right)^j\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{n+j-1}{j}\left(\frac{z-z_0}{z_0-z_1}\right)^j\tag{4}\\
\end{align*}</p>
<p>Since we have to do a lot of calculations with the zeros
$$\left\{-1,\frac{1}{2}\left(1+i\sqrt{3}\right),\frac{1}{2}\left(1-i\sqrt{3}\right)\right\}=\left\{-1,\zeta_1,\overline{\zeta_1}\right\}$$
its convenient to prepare some intermediate results
\begin{array}{r|rrr|lll}
\cdot&-1&\zeta_1&\overline{\zeta_1}\\
\hline
-1&1&-\zeta_1&-\overline{\zeta_1}&\qquad \frac{1}{\zeta_0-1}=-\frac{1}{2}\\
\zeta_1&-\zeta_1&-\overline{\zeta_1}&1
&\qquad \frac{1}{\zeta_1-1}=-\zeta_1&\frac{1}{\left(\zeta_1-1\right)^2}=-\overline{\zeta_1}
&\frac{1}{\left(\zeta_1-1\right)^3}=1\\
\overline{\zeta_1}&-\overline{\zeta_1}&1&-\zeta_1
&\qquad \frac{1}{\overline{\zeta_1}-1}=-\overline{\zeta_1}&\frac{1}{\left(\overline{\zeta_1}-1\right)^2}=-\zeta_1
&\frac{1}{\left(\overline{\zeta_1}-1\right)^3}=1\tag{5}\\
\end{array}</p>
<p>We are now ready and well prepared to continue.</p>
<blockquote>
<p>Evaluation of (4) with $n=2$ and at $z_1=1$ gives
\begin{align*}
\frac{1}{\left(z-z_0\right)^2}=\frac{1}{\left(z_0-1\right)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j
\end{align*}
Since we use in (2) a numerator $\alpha z+\beta$ we respect it also in the expansion
\begin{align*}
\frac{\alpha z+\beta}{\left(z-z_0\right)^2}
&=\frac{\alpha(z-1)+\alpha+\beta}{(z_0-1)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{(\alpha z_0+\beta)j+\alpha+\beta}{(z_0-1)^{j+2}}(z-1)^j\tag{6}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. We take the three terms of (2) with $-1,\zeta_1,\overline{\zeta_1}$ and evaluate them using (6). We also note that according to (5) $\frac{1}{\left(\zeta_1-1\right)^3}=1$ so that we can split the sum according to the index $j$ modulo $3$.</p>
<blockquote>
<p>We obtain using the relations in (5)
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
&=\sum_{j=0}^\infty\frac{\left(\overline{\zeta_1}\zeta_1-1\right)j+\overline{\zeta_1}-2}{\left(\zeta_1-1\right)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\left(-\overline{\zeta_1}\right)\left(-j+\overline{\zeta_1}-2\right)}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\overline{\zeta_1}(j+2)+\zeta_1}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)\overline{\zeta_1}+\zeta_1\right]\left(-\zeta_1\right)(z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}+\zeta_1\right]\left(-\overline{\zeta_1}\right)(z-1)^{3j+2}\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)(-1)+\overline{\zeta_1}\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}-1\right](z-1)^{3j+2}\\
\end{align*}
We can now earn that first and second term are complex conjugates and obtain
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
&=\sum_{j=0}^\infty\left[3j+3\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[-6j-5\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[3j+2\right](z-1)^{3j+2}\tag{7}\\
\end{align*}</p>
</blockquote>
<p>Last but not least the final term of (2)</p>
<blockquote>
<p>\begin{align*}
\frac{z+2}{(z+1)^2}&=\sum_{j=0}^\infty\frac{(-1+2)j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty (-1)^j\frac{3j+3}{2^{3j}}(z-1)^{3j}\\
&\qquad\sum_{j=0}^\infty (-1)^{j+1}\frac{3j+4}{2^{3j+1}}(z-1)^{3j+1}\\
&\qquad\sum_{j=0}^\infty (-1)^{j}\frac{3j+5}{2^{3j+1}}(z-1)^{3j+2}\tag{8}\\
\end{align*}</p>
<p>Combining (7) and (8) and the claim (1) follows.</p>
</blockquote>
<p><strong>Note:</strong> The nearest singularities of $z=1$ are $\zeta_1$ and $\overline{\zeta_1}$, both having the same distance to $1$. This shows the radius of convergence is
\begin{align*}
\left|1-\zeta_1\right|&=\left|1-\overline{\zeta_1}\right|=\left|1-\frac{1}{2}\left(1-i\sqrt{3}\right)\right|\\
&=\frac{1}{2}\left|1+i\sqrt{3}\right|\\
&=1
\end{align*}</p>
<hr>
<blockquote>
<p><strong>Add-on 2017-01-15:</strong></p>
<p>OPs was asking how to obtain the coefficients $\alpha_j,\beta_j$, $j=0,1,2$ in the partial fraction decomposition (2)</p>
<p>\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}
\end{align*}</p>
</blockquote>
<p>One method is to multiply both sides with the denominator and perform a coefficient comparison for $[z^k]$, the coefficient of $z^k$ with $0\leq k\leq 5$ and obtain this way six equations for the six unknowns.</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\beta_0\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z+\beta_1\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z+\beta_2\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{9}
\end{align*}</p>
<p>But its laborious to obtain six equations for <em>all</em> coefficients this way. We prefer a two step approach which reduces somewhat the calculations. Nevertheless it <em>is</em> somewhat tedious.</p>
<blockquote>
<p><strong>Step 1:</strong> Coefficients of highest power two.</p>
<p>We consider
\begin{align*}
\frac{z}{(z^3+1)^2}=
\frac{\alpha_0z+\beta_0}{\left(z+1\right)^2}
+\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}
+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\tag{10}
\end{align*}</p>
<p>We now multiply both sides of (10) with $(z+1)^2$ and evaluate the resulting equation at $z=-1$.
\begin{align*}
\frac{z}{(z^3+1)^2}\cdot(z+1)^2&=\left(\alpha_0z+\beta_0\right)
+\left(\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\right)
\cdot\left(z+1\right)^2
\end{align*}
Note that at the left hand side $-1$ is a <em><a href="https://en.wikipedia.org/wiki/Removable_singularity" rel="nofollow noreferrer">removable singularity</a></em> and evaluating both sides at $z=-1$ gives
\begin{align*}
\color{blue}{-\frac{1}{9}=-\alpha_0+\beta_0}
\end{align*}</p>
<p>Multiplication of (9) with $(z-\zeta_1)^2$ and with $(z-\overline{\zeta_1})^2$ and evaluating at $z=\zeta_1$ and $z=\overline{\zeta_1}$ gives two more equations
\begin{align*}
\color{blue}{-\frac{1}{9}}&\color{blue}{=-\alpha_1\zeta_1+\beta_1}\\
\color{blue}{-\frac{1}{9}}&\color{blue}{=-\alpha_2\overline{\zeta_1}+\beta_2}
\end{align*}</p>
</blockquote>
<p>We put the results of these three equations into (9) and obtain</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\left(\alpha_0-1/9\right)\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z-(\alpha_1\zeta_1+1/9)\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z-(\alpha_1\overline{\zeta_1}+1/9)\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{11}
\end{align*}</p>
<p>Since we have three unknowns $\alpha_0,\alpha_1,\alpha_2$ left, we need three more equations for determination. We next consider coefficient comparison for $z^0,z^1$ and $z^2$. Observe that each of the three summands in (11) contain five factors. So for $[z^0]$ we derive from each summand $\binom{5}{0}=1$ terms, for $[z^1]$ we derive $\binom{5}{1}=5$ terms and for $[z^2]$ we derive from each summand $\binom{5}{2}=10$ terms.</p>
<p>We will also consequently use the formulas from (5) to simplify expressions.</p>
<blockquote>
<p><strong>Step 2:</strong> Coefficient comparison for $[z^0],[z^1]$ and $[z^2]$.</p>
<p>Coefficient comparison $[z^0]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[\left(\alpha_0-\frac{1}{9}\right)\zeta_1^2\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_1+\frac{1}{9}\right)\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_2+\frac{1}{9}\right)\zeta_1^2\right]\\
&=\left[\alpha_0-\frac{1}{9}\right]+\left[-\alpha_1\overline{\zeta_1}+\frac{1}{9}\zeta_1\right]
+\left[-\alpha_2\zeta_1+\frac{1}{9}\overline{\zeta_1}\right]\\
\color{blue}{0}&\color{blue}{=\alpha_1-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1}
\end{align*}</p>
<p>Coefficient comparison $[z^1]$:</p>
<p>We obtain from (11)
\begin{align*}
1&=\left[\alpha_0\zeta_1^2\overline{\zeta_1}^2+2\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_1\overline{\zeta_1}^2-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_2\zeta_1^2-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2-\zeta_1\right)\right]\\
&=\left[\alpha_0+2\left(\alpha_0-\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&\qquad+\left[-\alpha_1\zeta_1-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-\alpha_2\overline{\zeta_1}-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&=\left[-\alpha_0+\frac{1}{3}\right]+\left[\alpha_1\zeta_1+\frac{1}{3}\right]
+\left[\alpha_2\overline{\zeta_1}+\frac{1}{3}\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}}
\end{align*}</p>
<p>Coefficient comparison $[z^2]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[2\alpha_0\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(\overline{\zeta_1}^2+\zeta_1^2+4\zeta_1\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_1\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(\zeta_1^2-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2+1-4\zeta_1\right)\right]\\
&=\left[2\alpha_0\left(-\overline{\zeta_1}-\zeta_1\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}+4\right)\right]\\
&\qquad+\left[2\alpha_1\left(-\zeta_1-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(-\overline{\zeta_1}-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}+1-4\zeta_1\right)\right]\\
&=\left[-2\alpha_0+\left(\alpha_0-\frac{1}{9}\right)3\right]+\left[-2\alpha_1-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-3\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-2\alpha_2-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-3\zeta_1\right)\right]\\
&=\left[\alpha_0-\frac{1}{3}\right]+\left[\alpha_1+\frac{1}{3}\overline{\zeta_1}\right]
+\left[\alpha_2+\frac{1}{3}\zeta_1\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0+\alpha_1+\alpha_2}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. </p>
<blockquote>
<p><strong>Step 3:</strong> Linear equations for $\alpha_0,\alpha_1,\alpha_2$</p>
<p>We obtain via comparison of coefficients the three equations
\begin{align*}
\alpha_1-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1&=0\\
\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}&=0\\
\alpha_0+\alpha_1+\alpha_2&=0
\end{align*}
with solutions
\begin{align*}
\alpha_0=-\frac{1}{9},\quad\alpha_1=\frac{1}{9}\overline{\zeta_1},\quad\alpha_2=\frac{1}{9}\zeta_1
\end{align*}
We obtain by putting these values in the equations of step 1
\begin{align*}
\beta_0&=\alpha_0-\frac{1}{9}=-\frac{2}{9}\\
\beta_1&=\alpha_1\zeta_1-\frac{1}{9}=\frac{1}{9}\overline{\zeta_1}\zeta_1=\frac{1}{9}\\
\beta_2&=\alpha_2\overline{\zeta_1}-\frac{1}{9}=\frac{1}{9}\zeta_1\overline{\zeta_1}=\frac{1}{9}
\end{align*}</p>
</blockquote>
<p>and we finally conclude
\begin{align*}
\frac{z}{(1+z^3)^2}&=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\\
&=\frac{1}{9}\left(
-\frac{z+2}{(z+1)^2}
+\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)
\end{align*}</p>
Answer 8: <p>Notice that</p>
<p>$$\frac z{(1+z^3)^2}=-z\frac{\partial}{\partial z^3}\frac1{1+z^3}=-z\frac\partial{\partial z^3}\sum_{n=0}^\infty(-1)^n(z^3)^n=\sum_{n=0}^\infty n(-1)^{n+1}z^{3n-2}$$</p>
<p>This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in <a href="https://math.stackexchange.com/questions/2048360/given-some-power-series-how-do-i-make-one-centered-at-a-different-point">this answer</a> to get the expansion at $z=1$.</p>
<p>$$=\sum_{j=0}^\infty \left(\sum_{k=j}^\infty \binom kj a_k \right)(z-1)^j$$</p>
<p>where $a_{3n+1}=(-1)^n(n+1)$ for $n\in\mathbb N$, else $a_k=0$.</p>
<p>However, note that this method is usually only viable when the center for the radius of convergence was within the radius of convergence of the original series. For example, plugging $z=1$ into our "expansion" yields</p>
<p>$$\frac14\stackrel?=1-2+3-4+5-6+\dots$$</p>
<p>However, an interesting case happens that we may regularize our divergent series to equal our original function by applying an <a href="https://en.wikipedia.org/wiki/Euler_summation" rel="nofollow noreferrer">Euler sum</a> to it:</p>
<p>$$f(z)=\sum_{j=0}^\infty\sum_{k=\lfloor j/3\rfloor}^\infty \binom{3k+1}j (-1)^k(k+1)(z-1)^j\\=\sum_{j=0}^\infty\sum_{i=\lfloor j/3\rfloor}^\infty\frac1{2^i}\sum_{k=\lfloor j/3\rfloor}^i\binom ik\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$</p>
<p>which now converges as desired.</p>
Answer 9: <blockquote>
<p>The following expansion of $f(z)=\frac{z}{(1+z^3)^2}$ at $z=1$ is valid</p>
<p>\begin{align*}
f(z)&=\frac{z}{(1+z^3)^2}\\
&=\frac{1}{9}\sum_{j=0}^\infty\left(3j+3-(-1)^j\frac{3j+3}{2^{3j+2}}\right)(z-1)^{3j}\\
&\qquad-\frac{1}{9}\sum_{j=0}^\infty\left(6j+5-(-1)^j\frac{3j+4}{2^{3j+3}}\right)(z-1)^{3j+1}\\
&\qquad+\frac{1}{9}\sum_{j=0}^\infty\left(3j+2-(-1)^j\frac{3j+5}{2^{3j+4}}\right)(z-1)^{3j+2}\qquad\qquad |z-1|<1\tag{1}\\
&=\frac{1}{4}-\frac{1}{2}(z-1)+\frac{3}{16}(z-1)^2+\frac{11}{16}(z-1)^3\\
&\qquad-\frac{79}{64}(z-1)^4+\frac{9}{16}(z-1)^5+\frac{255}{256}(z-1)^6+\cdots
\end{align*}</p>
</blockquote>
<p>We perform a <em><a href="https://en.wikipedia.org/wiki/Partial_fraction_decomposition" rel="nofollow noreferrer">partial fraction decomposition</a></em> of $f(z)$. The zeros of the denominator $(1+z^3)^2$ are
\begin{align*}
\zeta_0&=-1\\
\zeta_1&=e^\frac{\pi i}{3}=\frac{1}{2}\left(1+i\sqrt{3}\right)\\
\zeta_2&=e^{-\frac{\pi i}{3}}=\frac{1}{2}\left(1-i\sqrt{3}\right)=\overline{\zeta_1}
\end{align*}
Since each of the zeros has muliplicity two we use the Ansatz:
\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\tag{2}
\end{align*}</p>
<blockquote>
<p>Multiplication of the RHS of (2) and comparison of coefficients gives</p>
<p>\begin{align*}
\frac{z}{(z^3+1)^2}=\frac{1}{9}\left(-\frac{z+2}{(z+1)^2}+
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)\tag{3}
\end{align*}</p>
<p>Note, the second summand of the RHS of (3) is the complex conjugate of the third summand. This will help to simplify some calculations.</p>
</blockquote>
<p>Next we want to expand the terms of the RHS of (3) at $z=1$. In order to do so it's convenient to do some preparatory work. We consider the general aspect of expanding a term around $z=z_0$.</p>
<p>We obtain
\begin{align*}
\frac{1}{(z-z_0)^n}&=\frac{1}{\left((z_1-z_0)+(z-z_1)\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\cdot\frac{1}{\left(1+\frac{z-z_1}{z_1-z_0}\right)^n}\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{-n}{j}\left(\frac{z-z_0}{z_1-z_0}\right)^j\\
&=\frac{1}{(z_1-z_0)^n}\sum_{j=0}^\infty \binom{n+j-1}{j}\left(\frac{z-z_0}{z_0-z_1}\right)^j\tag{4}\\
\end{align*}</p>
<p>Since we have to do a lot of calculations with the zeros
$$\left\{-1,\frac{1}{2}\left(1+i\sqrt{3}\right),\frac{1}{2}\left(1-i\sqrt{3}\right)\right\}=\left\{-1,\zeta_1,\overline{\zeta_1}\right\}$$
its convenient to prepare some intermediate results
\begin{array}{r|rrr|lll}
\cdot&-1&\zeta_1&\overline{\zeta_1}\\
\hline
-1&1&-\zeta_1&-\overline{\zeta_1}&\qquad \frac{1}{\zeta_0-1}=-\frac{1}{2}\\
\zeta_1&-\zeta_1&-\overline{\zeta_1}&1
&\qquad \frac{1}{\zeta_1-1}=-\zeta_1&\frac{1}{\left(\zeta_1-1\right)^2}=-\overline{\zeta_1}
&\frac{1}{\left(\zeta_1-1\right)^3}=1\\
\overline{\zeta_1}&-\overline{\zeta_1}&1&-\zeta_1
&\qquad \frac{1}{\overline{\zeta_1}-1}=-\overline{\zeta_1}&\frac{1}{\left(\overline{\zeta_1}-1\right)^2}=-\zeta_1
&\frac{1}{\left(\overline{\zeta_1}-1\right)^3}=1\tag{5}\\
\end{array}</p>
<p>We are now ready and well prepared to continue.</p>
<blockquote>
<p>Evaluation of (4) with $n=2$ and at $z_1=1$ gives
\begin{align*}
\frac{1}{\left(z-z_0\right)^2}=\frac{1}{\left(z_0-1\right)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j
\end{align*}
Since we use in (2) a numerator $\alpha z+\beta$ we respect it also in the expansion
\begin{align*}
\frac{\alpha z+\beta}{\left(z-z_0\right)^2}
&=\frac{\alpha(z-1)+\alpha+\beta}{(z_0-1)^2}\sum_{j=0}^\infty\frac{j+1}{(z_0-1)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{(\alpha z_0+\beta)j+\alpha+\beta}{(z_0-1)^{j+2}}(z-1)^j\tag{6}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. We take the three terms of (2) with $-1,\zeta_1,\overline{\zeta_1}$ and evaluate them using (6). We also note that according to (5) $\frac{1}{\left(\zeta_1-1\right)^3}=1$ so that we can split the sum according to the index $j$ modulo $3$.</p>
<blockquote>
<p>We obtain using the relations in (5)
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
&=\sum_{j=0}^\infty\frac{\left(\overline{\zeta_1}\zeta_1-1\right)j+\overline{\zeta_1}-2}{\left(\zeta_1-1\right)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\left(-\overline{\zeta_1}\right)\left(-j+\overline{\zeta_1}-2\right)}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{\overline{\zeta_1}(j+2)+\zeta_1}{\left(\zeta_1-1\right)^j}(z-1)^j\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)\overline{\zeta_1}+\zeta_1\right]\left(-\zeta_1\right)(z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}+\zeta_1\right]\left(-\overline{\zeta_1}\right)(z-1)^{3j+2}\\
&=\sum_{j=0}^\infty\left[(3j+2)\overline{\zeta_1}+\zeta_1\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+3)(-1)+\overline{\zeta_1}\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[(3j+4)\overline{\zeta_1}-1\right](z-1)^{3j+2}\\
\end{align*}
We can now earn that first and second term are complex conjugates and obtain
\begin{align*}
\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
&=\sum_{j=0}^\infty\left[3j+3\right](z-1)^{3j}\\
&\qquad+\sum_{j=0}^\infty\left[-6j-5\right](z-1)^{3j+1}\\
&\qquad+\sum_{j=0}^\infty\left[3j+2\right](z-1)^{3j+2}\tag{7}\\
\end{align*}</p>
</blockquote>
<p>Last but not least the final term of (2)</p>
<blockquote>
<p>\begin{align*}
\frac{z+2}{(z+1)^2}&=\sum_{j=0}^\infty\frac{(-1+2)j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty\frac{j+3}{(-2)^{j+2}}(z-1)^j\\
&=\sum_{j=0}^\infty (-1)^j\frac{3j+3}{2^{3j}}(z-1)^{3j}\\
&\qquad\sum_{j=0}^\infty (-1)^{j+1}\frac{3j+4}{2^{3j+1}}(z-1)^{3j+1}\\
&\qquad\sum_{j=0}^\infty (-1)^{j}\frac{3j+5}{2^{3j+1}}(z-1)^{3j+2}\tag{8}\\
\end{align*}</p>
<p>Combining (7) and (8) and the claim (1) follows.</p>
</blockquote>
<p><strong>Note:</strong> The nearest singularities of $z=1$ are $\zeta_1$ and $\overline{\zeta_1}$, both having the same distance to $1$. This shows the radius of convergence is
\begin{align*}
\left|1-\zeta_1\right|&=\left|1-\overline{\zeta_1}\right|=\left|1-\frac{1}{2}\left(1-i\sqrt{3}\right)\right|\\
&=\frac{1}{2}\left|1+i\sqrt{3}\right|\\
&=1
\end{align*}</p>
<hr>
<blockquote>
<p><strong>Add-on 2017-01-15:</strong></p>
<p>OPs was asking how to obtain the coefficients $\alpha_j,\beta_j$, $j=0,1,2$ in the partial fraction decomposition (2)</p>
<p>\begin{align*}
\frac{z}{(1+z^3)^2}=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}
\end{align*}</p>
</blockquote>
<p>One method is to multiply both sides with the denominator and perform a coefficient comparison for $[z^k]$, the coefficient of $z^k$ with $0\leq k\leq 5$ and obtain this way six equations for the six unknowns.</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\beta_0\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z+\beta_1\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z+\beta_2\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{9}
\end{align*}</p>
<p>But its laborious to obtain six equations for <em>all</em> coefficients this way. We prefer a two step approach which reduces somewhat the calculations. Nevertheless it <em>is</em> somewhat tedious.</p>
<blockquote>
<p><strong>Step 1:</strong> Coefficients of highest power two.</p>
<p>We consider
\begin{align*}
\frac{z}{(z^3+1)^2}=
\frac{\alpha_0z+\beta_0}{\left(z+1\right)^2}
+\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}
+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\tag{10}
\end{align*}</p>
<p>We now multiply both sides of (10) with $(z+1)^2$ and evaluate the resulting equation at $z=-1$.
\begin{align*}
\frac{z}{(z^3+1)^2}\cdot(z+1)^2&=\left(\alpha_0z+\beta_0\right)
+\left(\frac{\alpha_1z+\beta_1}{\left(z-\zeta_1\right)^2}+\frac{\alpha_2z+\beta_2}{\left(z-\overline{\zeta_1}\right)^2}\right)
\cdot\left(z+1\right)^2
\end{align*}
Note that at the left hand side $-1$ is a <em><a href="https://en.wikipedia.org/wiki/Removable_singularity" rel="nofollow noreferrer">removable singularity</a></em> and evaluating both sides at $z=-1$ gives
\begin{align*}
\color{blue}{-\frac{1}{9}=-\alpha_0+\beta_0}
\end{align*}</p>
<p>Multiplication of (9) with $(z-\zeta_1)^2$ and with $(z-\overline{\zeta_1})^2$ and evaluating at $z=\zeta_1$ and $z=\overline{\zeta_1}$ gives two more equations
\begin{align*}
\color{blue}{-\frac{1}{9}}&\color{blue}{=-\alpha_1\zeta_1+\beta_1}\\
\color{blue}{-\frac{1}{9}}&\color{blue}{=-\alpha_2\overline{\zeta_1}+\beta_2}
\end{align*}</p>
</blockquote>
<p>We put the results of these three equations into (9) and obtain</p>
<p>\begin{align*}
z&=\left(\alpha_0z+\left(\alpha_0-1/9\right)\right)\left(z-\zeta_1\right)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_1z-(\alpha_1\zeta_1+1/9)\right)(z+1)^2\left(z-\overline{\zeta_1}\right)^2\\
&\qquad+\left(\alpha_2z-(\alpha_1\overline{\zeta_1}+1/9)\right)(z+1)^2\left(z-\zeta_1\right)^2\tag{11}
\end{align*}</p>
<p>Since we have three unknowns $\alpha_0,\alpha_1,\alpha_2$ left, we need three more equations for determination. We next consider coefficient comparison for $z^0,z^1$ and $z^2$. Observe that each of the three summands in (11) contain five factors. So for $[z^0]$ we derive from each summand $\binom{5}{0}=1$ terms, for $[z^1]$ we derive $\binom{5}{1}=5$ terms and for $[z^2]$ we derive from each summand $\binom{5}{2}=10$ terms.</p>
<p>We will also consequently use the formulas from (5) to simplify expressions.</p>
<blockquote>
<p><strong>Step 2:</strong> Coefficient comparison for $[z^0],[z^1]$ and $[z^2]$.</p>
<p>Coefficient comparison $[z^0]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[\left(\alpha_0-\frac{1}{9}\right)\zeta_1^2\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_1+\frac{1}{9}\right)\overline{\zeta_1}^2\right]
+\left[-\left(\alpha_2+\frac{1}{9}\right)\zeta_1^2\right]\\
&=\left[\alpha_0-\frac{1}{9}\right]+\left[-\alpha_1\overline{\zeta_1}+\frac{1}{9}\zeta_1\right]
+\left[-\alpha_2\zeta_1+\frac{1}{9}\overline{\zeta_1}\right]\\
\color{blue}{0}&\color{blue}{=\alpha_1-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1}
\end{align*}</p>
<p>Coefficient comparison $[z^1]$:</p>
<p>We obtain from (11)
\begin{align*}
1&=\left[\alpha_0\zeta_1^2\overline{\zeta_1}^2+2\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_1\overline{\zeta_1}^2-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[\alpha_2\zeta_1^2-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2-\zeta_1\right)\right]\\
&=\left[\alpha_0+2\left(\alpha_0-\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&\qquad+\left[-\alpha_1\zeta_1-2\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-\alpha_2\overline{\zeta_1}-2\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}-\zeta_1\right)\right]\\
&=\left[-\alpha_0+\frac{2}{9}\right]+\left[\alpha_1\zeta_1+\frac{2}{9}\right]
+\left[\alpha_2\overline{\zeta_1}+\frac{2}{9}\right]\\
\color{blue}{-\frac{1}{3}}&\color{blue}{=\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}}
\end{align*}</p>
<p>Coefficient comparison $[z^2]$:</p>
<p>We obtain from (11)
\begin{align*}
0&=\left[2\alpha_0\left(-\zeta_1\overline{\zeta_1}^2-\zeta_1^2\overline{\zeta_1}\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(\overline{\zeta_1}^2+\zeta_1^2+4\zeta_1\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_1\left(\overline{\zeta_1}^2-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(\overline{\zeta_1}^2+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(\zeta_1^2-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(\zeta_1^2+1-4\zeta_1\right)\right]\\
&=\left[2\alpha_0\left(-\overline{\zeta_1}-\zeta_1\right)
+\left(\alpha_0-\frac{1}{9}\right)\left(-\zeta_1-\overline{\zeta_1}+4\right)\right]\\
&\qquad+\left[2\alpha_1\left(-\zeta_1-\overline{\zeta_1}\right)
-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-\zeta_1+1-4\overline{\zeta_1}\right)\right]\\
&\qquad+\left[2\alpha_2\left(-\overline{\zeta_1}-\zeta_1\right)
-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-\overline{\zeta_1}+1-4\zeta_1\right)\right]\\
&=\left[-2\alpha_0+\left(\alpha_0-\frac{1}{9}\right)3\right]+\left[-2\alpha_1-\left(\alpha_1\zeta_1+\frac{1}{9}\right)\left(-3\overline{\zeta_1}\right)\right]\\
&\qquad+\left[-2\alpha_2-\left(\alpha_2\overline{\zeta_1}+\frac{1}{9}\right)\left(-3\zeta_1\right)\right]\\
&=\left[\alpha_0-\frac{1}{3}\right]+\left[\alpha_1+\frac{1}{3}\overline{\zeta_1}\right]
+\left[\alpha_2+\frac{1}{3}\zeta_1\right]\\
\color{blue}{0}&\color{blue}{=\alpha_0+\alpha_1+\alpha_2}
\end{align*}</p>
</blockquote>
<p>Now it's time to harvest. </p>
<blockquote>
<p><strong>Step 3:</strong> Linear equations for $\alpha_0,\alpha_1,\alpha_2$</p>
<p>We obtain via comparison of coefficients the three equations
\begin{align*}
\alpha_0-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1&=0\\
\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}&=-\frac{1}{3}\\
\alpha_0+\alpha_1+\alpha_2&=0
\end{align*}
with solutions
\begin{align*}
\alpha_0=-\frac{1}{9},\quad\alpha_1=\frac{1}{9}\overline{\zeta_1},\quad\alpha_2=\frac{1}{9}\zeta_1
\end{align*}
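<p>These values can be checked directly against the system: with $\zeta_1^2=-\overline{\zeta_1}$, $\overline{\zeta_1}^2=-\zeta_1$ and $\zeta_1+\overline{\zeta_1}=\zeta_1\overline{\zeta_1}=1$ from (5) we obtain
\begin{align*}
\alpha_0-\alpha_1\overline{\zeta_1}-\alpha_2\zeta_1&=-\frac{1}{9}-\frac{1}{9}\overline{\zeta_1}^2-\frac{1}{9}\zeta_1^2=-\frac{1}{9}+\frac{1}{9}\left(\zeta_1+\overline{\zeta_1}\right)=0\\
\alpha_0-\alpha_1\zeta_1-\alpha_2\overline{\zeta_1}&=-\frac{1}{9}-\frac{1}{9}-\frac{1}{9}=-\frac{1}{3}\\
\alpha_0+\alpha_1+\alpha_2&=-\frac{1}{9}+\frac{1}{9}\left(\overline{\zeta_1}+\zeta_1\right)=0
\end{align*}</p>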
We obtain by putting these values in the equations of step 1
\begin{align*}
\beta_0&=\alpha_0-\frac{1}{9}=-\frac{2}{9}\\
\beta_1&=-\alpha_1\zeta_1-\frac{1}{9}=-\frac{1}{9}\overline{\zeta_1}\zeta_1-\frac{1}{9}=-\frac{2}{9}\\
\beta_2&=-\alpha_2\overline{\zeta_1}-\frac{1}{9}=-\frac{1}{9}\zeta_1\overline{\zeta_1}-\frac{1}{9}=-\frac{2}{9}
\end{align*}</p>
</blockquote>
<p>and we finally conclude
\begin{align*}
\frac{z}{(1+z^3)^2}&=\sum_{j=0}^2\frac{\alpha_j z+\beta_j}{(z-\zeta_j)^2}\\
&=\frac{1}{9}\left(
-\frac{z+2}{(z+1)^2}
+\frac{\overline{\zeta_1}z-2}{\left(z-\zeta_1\right)^2}
+\frac{\zeta_1z-2}{\left(z-\overline{\zeta_1}\right)^2}
\right)
\end{align*}</p>
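<p>As a quick sanity check of this decomposition we can evaluate both sides at $z=1$: the left-hand side is $\frac{1}{4}$, and using $\frac{1}{\left(\zeta_1-1\right)^2}=-\overline{\zeta_1}$ and $\frac{1}{\left(\overline{\zeta_1}-1\right)^2}=-\zeta_1$ from (5) the right-hand side becomes
\begin{align*}
\frac{1}{9}\left(-\frac{3}{4}+\left(\overline{\zeta_1}-2\right)\left(-\overline{\zeta_1}\right)+\left(\zeta_1-2\right)\left(-\zeta_1\right)\right)
=\frac{1}{9}\left(-\frac{3}{4}+3\left(\zeta_1+\overline{\zeta_1}\right)\right)
=\frac{1}{9}\cdot\frac{9}{4}=\frac{1}{4}
\end{align*}
in accordance with the constant term of the expansion (1).</p>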
Answer 10: <p>Notice that</p>
<p>$$\frac z{(1+z^3)^2}=-z\frac{\partial}{\partial z^3}\frac1{1+z^3}=-z\frac\partial{\partial z^3}\sum_{n=0}^\infty(-1)^n(z^3)^n=\sum_{n=0}^\infty n(-1)^{n+1}z^{3n-2}$$</p>
<p>This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in <a href="https://math.stackexchange.com/questions/2048360/given-some-power-series-how-do-i-make-one-centered-at-a-different-point">this answer</a> to get the expansion at $z=1$.</p>
<p>$$=\sum_{j=0}^\infty \left(\sum_{k=j}^\infty \binom kj a_k \right)(z-1)^j$$</p>
<p>where $a_{3n+1}=(-1)^n(n+1)$ for $n\in\mathbb N$, else $a_k=0$.</p>
<p>However, note that this method is usually only viable when the new center lies within the radius of convergence of the original series. For example, plugging $z=1$ into our "expansion" yields</p>
<p>$$\frac14\stackrel?=1-2+3-4+5-6+\dots$$</p>
<p>Interestingly, however, we may regularize our divergent series so that it equals our original function by applying an <a href="https://en.wikipedia.org/wiki/Euler_summation" rel="nofollow noreferrer">Euler sum</a> to it:</p>
<p>$$f(z)=\sum_{j=0}^\infty\sum_{k=\lfloor j/3\rfloor}^\infty \binom{3k+1}j (-1)^k(k+1)(z-1)^j\\=\sum_{j=0}^\infty\sum_{i=\lfloor j/3\rfloor}^\infty\frac1{2^i}\sum_{k=\lfloor j/3\rfloor}^i\binom ik\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$</p>
<p>which now converges as desired.</p>
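<p>To see the regularization at work at the center itself: in one common normalization the Euler transform of an alternating series reads $\sum_{n=0}^\infty(-1)^na_n=\sum_{n=0}^\infty(-1)^n\frac{\Delta^na_0}{2^{n+1}}$ with forward differences $\Delta a_n=a_{n+1}-a_n$. For $a_n=n+1$ we have $\Delta^0a_0=\Delta^1a_0=1$ and all higher differences vanish, so</p>
<p>$$1-2+3-4+5-6+\dots\;\longrightarrow\;\frac{1}{2}-\frac{1}{4}=\frac{1}{4}$$</p>
<p>which is exactly the value $f(1)=\frac{1}{4}$ the divergent series was supposed to represent.</p>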
|
Prove that $x+y+z=3$ $\implies$ $x^xy^yz^z\ge1$ | 2,091,063 | 5 | https://math.stackexchange.com/questions/2091063/prove-that-xyz-3-implies-xxyyzz-ge1 | <p>If $x,y,z$ are positive real numbers such that $x+y+z=3$, prove that
$$x^xy^yz^z\ \ge\ 1$$
<strong>My attempt</strong><br>
This is equivalent to
$$x\ln x + y\ln y + z\ln z\ \ge\ 0$$
By AM-GM</p>
<p>$\dfrac{x\ln x + y\ln y + z\ln z}3 \ge \sqrt[3]{xyz(\ln x)(\ln y)(\ln z)}$</p>
<p>I don't know what to do next.</p>
| [
{
"answer_id": 2091067,
"score": 4,
"text": "<p>You can't use AM–GM: some of the terms on the LHS may be negative. Try Jensen's inequality instead.</p>\n"
},
{
"answer_id": 2091070,
"score": 4,
"text": "<p><strong>Hint</strong> : </p>\n\n<p>$x \\longmapsto x\\ln(x)$ is convex. Use convexity inequality aka Jensen's inequality to prove your result.</p>\n"
},
{
"answer_id": 2091095,
"score": 1,
"text": "<p>$1+\\ln x$ is injective on $x \\in (0,\\infty)$ so it follows by Lagrange multipliers, that the minimum of $x^xy^yz^z$ subject to $x+y+z=3$ must occur when $x=y=z$:</p>\n\n<p>From Lagrange multipliers we must solve the system,</p>\n\n<p>$$x^x(1+\\ln x)y^yz^z=\\lambda$$</p>\n\n<p>$$y^y(1+\\ln y)x^xz^z=\\lambda$$</p>\n\n<p>$$z^z(1+\\ln z)x^xy^y=\\lambda$$</p>\n\n<p>This means we must have $1+\\ln x=1+\\ln y=1+\\ln z$. </p>\n\n<p>With the bordered hessian technique you can show that $x=y=z=1$ does indeed give a minimum of $x^xy^yz^z=1$.</p>\n"
},
{
"answer_id": 2091220,
"score": 1,
"text": "<p>Let $f(x)=x\\ln{x}$. </p>\n\n<p>Hence, $f''(x)=\\frac{1}{x}>0$, which says that $f$ is a convex function.</p>\n\n<p>Thus, by Jensen $$\\frac{\\sum\\limits_{cyc}x\\ln{x}}{3}\\geq\\frac{x+y+z}{3}\\ln\\frac{x+y+z}{3}=0$$ and we are done!</p>\n"
},
{
"answer_id": 2091067,
"score": 4,
"text": "<p>You can't use AM–GM: some of the terms on the LHS may be negative. Try Jensen's inequality instead.</p>\n"
},
{
"answer_id": 2091070,
"score": 4,
"text": "<p><strong>Hint</strong> : </p>\n\n<p>$x \\longmapsto x\\ln(x)$ is convex. Use convexity inequality aka Jensen's inequality to prove your result.</p>\n"
},
{
"answer_id": 2091095,
"score": 1,
"text": "<p>$1+\\ln x$ is injective on $x \\in (0,\\infty)$ so it follows by Lagrange multipliers, that the minimum of $x^xy^yz^z$ subject to $x+y+z=3$ must occur when $x=y=z$:</p>\n\n<p>From Lagrange multipliers we must solve the system,</p>\n\n<p>$$x^x(1+\\ln x)y^yz^z=\\lambda$$</p>\n\n<p>$$y^y(1+\\ln y)x^xz^z=\\lambda$$</p>\n\n<p>$$z^z(1+\\ln z)x^xy^y=\\lambda$$</p>\n\n<p>This means we must have $1+\\ln x=1+\\ln y=1+\\ln z$. </p>\n\n<p>With the bordered hessian technique you can show that $x=y=z=1$ does indeed give a minimum of $x^xy^yz^z=1$.</p>\n"
},
{
"answer_id": 2091220,
"score": 1,
"text": "<p>Let $f(x)=x\\ln{x}$. </p>\n\n<p>Hence, $f''(x)=\\frac{1}{x}>0$, which says that $f$ is a convex function.</p>\n\n<p>Thus, by Jensen $$\\frac{\\sum\\limits_{cyc}x\\ln{x}}{3}\\geq\\frac{x+y+z}{3}\\ln\\frac{x+y+z}{3}=0$$ and we are done!</p>\n"
},
{
"answer_id": 2091067,
"score": 4,
"text": "<p>You can't use AM–GM: some of the terms on the LHS may be negative. Try Jensen's inequality instead.</p>\n"
},
{
"answer_id": 2091070,
"score": 4,
"text": "<p><strong>Hint</strong> : </p>\n\n<p>$x \\longmapsto x\\ln(x)$ is convex. Use convexity inequality aka Jensen's inequality to prove your result.</p>\n"
},
{
"answer_id": 2091095,
"score": 1,
"text": "<p>$1+\\ln x$ is injective on $x \\in (0,\\infty)$ so it follows by Lagrange multipliers, that the minimum of $x^xy^yz^z$ subject to $x+y+z=3$ must occur when $x=y=z$:</p>\n\n<p>From Lagrange multipliers we must solve the system,</p>\n\n<p>$$x^x(1+\\ln x)y^yz^z=\\lambda$$</p>\n\n<p>$$y^y(1+\\ln y)x^xz^z=\\lambda$$</p>\n\n<p>$$z^z(1+\\ln z)x^xy^y=\\lambda$$</p>\n\n<p>This means we must have $1+\\ln x=1+\\ln y=1+\\ln z$. </p>\n\n<p>With the bordered hessian technique you can show that $x=y=z=1$ does indeed give a minimum of $x^xy^yz^z=1$.</p>\n"
}
] | Question: Prove that $x+y+z=3$ $\implies$ $x^xy^yz^z\ge1$
Answer 1: <p>You can't use AM–GM: some of the terms on the LHS may be negative. Try Jensen's inequality instead.</p>
Answer 2: <p><strong>Hint</strong>: </p>
<p>$x \longmapsto x\ln(x)$ is convex. Use the convexity inequality, a.k.a. Jensen's inequality, to prove your result.</p>
Answer 3: <p>$1+\ln x$ is injective on $x \in (0,\infty)$, so it follows by Lagrange multipliers that the minimum of $x^xy^yz^z$ subject to $x+y+z=3$ must occur when $x=y=z$:</p>
<p>From Lagrange multipliers we must solve the system,</p>
<p>$$x^x(1+\ln x)y^yz^z=\lambda$$</p>
<p>$$y^y(1+\ln y)x^xz^z=\lambda$$</p>
<p>$$z^z(1+\ln z)x^xy^y=\lambda$$</p>
<p>Dividing these equations pairwise, we must have $1+\ln x=1+\ln y=1+\ln z$. </p>
<p>With the bordered Hessian technique you can show that $x=y=z=1$ does indeed give a minimum of $x^xy^yz^z=1$.</p>
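<p><em>Editorial note:</em> the constrained minimum claimed in this answer can be cross-checked numerically. The sketch below assumes Python with scipy available; it is not part of the original thread.</p>
<pre><code># Sketch: numerically minimize x^x * y^y * z^z subject to x + y + z = 3.
import numpy as np
from scipy.optimize import minimize

def log_objective(v):
    # log of x^x * y^y * z^z, i.e. x ln x + y ln y + z ln z
    x, y, z = v
    return x * np.log(x) + y * np.log(y) + z * np.log(z)

constraint = {"type": "eq", "fun": lambda v: v.sum() - 3.0}
bounds = [(1e-9, 3.0)] * 3  # stay inside the domain of ln

result = minimize(log_objective, x0=[0.5, 1.0, 1.5],
                  bounds=bounds, constraints=[constraint])
print(result.x)            # expected: approximately [1, 1, 1]
print(np.exp(result.fun))  # expected: approximately 1, the claimed minimum
</code></pre>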
Answer 4: <p>Let $f(x)=x\ln{x}$. </p>
<p>Then $f''(x)=\frac{1}{x}>0$ for $x>0$, which says that $f$ is a convex function.</p>
<p>Thus, by Jensen $$\frac{\sum\limits_{cyc}x\ln{x}}{3}\geq\frac{x+y+z}{3}\ln\frac{x+y+z}{3}=0$$ and we are done!</p>
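<p><em>Editorial note:</em> the inequality itself is easy to sanity-check by random sampling. The sketch below assumes Python with numpy; it is not part of the original thread.</p>
<pre><code># Sketch: sample positive triples (x, y, z) with x + y + z = 3 and verify
# that log(x^x * y^y * z^z) = x ln x + y ln y + z ln z is never negative.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.dirichlet([1.0, 1.0, 1.0], size=100_000) * 3.0
samples = np.clip(samples, 1e-12, None)  # guard against log(0) in extreme draws
log_values = np.sum(samples * np.log(samples), axis=1)
print(log_values.min())  # expected: >= 0 up to tiny rounding error
</code></pre>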
|
An equality in matrix theory (also interesting in analysis) | 2,091,224 | 5 | https://math.stackexchange.com/questions/2091224/an-equality-in-matrix-theory-also-interesting-in-analysis | <p>Let $M_{n\times n}$ be a real, symmetric matrix and
$$
S=\left\{ \left(u_{1},\ldots,u_{k}\right)\Bigl|0\leq k\leq n,\left\{ u_{1},\ldots,u_{k}\right\} \textrm{ is an orthonormal subset in } \mathbb{R}^{n}\right\}.
$$
Prove that
$$
\sum_{\lambda\textrm{ a positive eigenvalue of }M}\lambda=\max_{S}\left\{ u_{1}Mu_{1}^{T}+\ldots+u_{k}Mu_{k}^{T}\right\}.
$$</p>
<p>A very special case is also interesting: let $\lambda_{1}$ and $\lambda_{2}$ be positive, and prove that
$$
\lambda_{1}\left(a_{11}^{2}+a_{21}^{2}\right)+\lambda_{2}\left(a_{12}^{2}+a_{22}^{2}\right)\leq\lambda_{1}+\lambda_{2},
$$
where $a_{11}a_{21}+a_{12}a_{22}=0$ and $a_{11}^{2}+a_{12}^{2}=a_{21}^{2}+a_{22}^{2}=1.$ </p>
| [
{
"answer_id": 2179706,
"score": 2,
"text": "<p>The question in matrix format is to prove that\n$$\n\\sum \\lambda_+(M)=\\max_{U^TU=I}\\operatorname{tr}U^TMU.\n$$\nThe unitary diagonalization $M=Q\\Lambda Q^T$ reduces the problem to proving that \n$$\n\\sum \\lambda_+(M)=\\max_{U^TU=I}\\operatorname{tr}U^T\\Lambda U=\\max_{U^TU=I}\\operatorname{tr}\\Lambda UU^T.\\tag{*}\n$$\nAs @Tom Chen has answered, building the matrix $U$ of those columns of the identity matrix that correspond to positive $\\lambda$'s we can prove that\n$$\n\\sum \\lambda_+(M)\\le\\max_{U^TU=I}\\operatorname{tr}U^T\\Lambda U.\n$$\nTo prove the opposite we introduce the set of $n\\times n$ matrices\n$$\n{\\cal W}=\\{W\\colon W=W^T,\\, 0\\le W\\le I\\}\n$$\nwhere the inequalities are understood in the semidefinite sense, i.e. $W$ and $I-W$ are positively semidefinite matrices. Such matrices, in particular, have the diagonal elements being between $0$ and $1$ (as a simple conclusion by definition of positively semidefiniteness). We have\n$$\n\\operatorname{tr}\\Lambda W=\\sum_{i=1}^n\\lambda_i W_{ii}.\n$$\nThe maximization over ${\\cal W}$ is trivial - the best choice is to take $W_{ii}=1$ for a positive $\\lambda_i$ and $0$ otherwise for all other $W_{ij}$. Thus,\n$$\n\\max_{W\\in{\\cal W}}\\operatorname{tr}\\Lambda W=\\sum\\lambda_+(M).\n$$\nFinally, we rewrite the RHS in $(*)$ as\n$$\n\\max_{U^TU=I}\\operatorname{tr}\\Lambda UU^T\n$$\nand notice that $W=UU^T\\in{\\cal W}$ if $U^TU=I$. Indeed, $W=W^T$ and $W\\ge 0$ is clear. Furthermore, $U^TU=I$ $\\Rightarrow$ $\\|U\\|=1$ $\\Rightarrow$ $\\|U^T\\|=1$ $\\Rightarrow$ $UU^T\\le I$ $\\Rightarrow$ $I-UU^T\\ge 0$. Thus,\n$$\n\\sum\\lambda_+(M)=\\max_{W\\in{\\cal W}}\\operatorname{tr}\\Lambda W\\ge \\max_{U^TU=I}\\operatorname{tr}\\Lambda UU^T\n$$\nSince the maximum over a subset is smaller. It gives the opposite inequality that proves $(*)$.</p>\n"
},
{
"answer_id": 2091402,
"score": 1,
"text": "<p>By properties of eigensystems, we have $Mu_i = \\lambda_i u_i$ for orthonormal eigenvectors $\\{u_1, \\cdots, u_n\\}$, hence $u_i^\\intercal M u_i = \\lambda_i$. For any $U \\in S_M$, where $S_M \\subseteq S$ is the restriction of $S$ to eigenvectors of $M$, we have $UMU^\\intercal$ corresponds to some sum of a subset of eigenvalues of $M$. Clearly the maximum sum is the sum of the positive eigenvalues of $M$, and the conclusion follows.</p>\n"
}
] | Question: An equality in matrix theory (also interesting in analysis)
Answer 1: <p>The question in matrix format is to prove that
$$
\sum \lambda_+(M)=\max_{U^TU=I}\operatorname{tr}U^TMU.
$$
The unitary diagonalization $M=Q\Lambda Q^T$ reduces the problem to proving that
$$
\sum \lambda_+(M)=\max_{U^TU=I}\operatorname{tr}U^T\Lambda U=\max_{U^TU=I}\operatorname{tr}\Lambda UU^T.\tag{*}
$$
As @Tom Chen has answered, building the matrix $U$ from those columns of the identity matrix that correspond to positive $\lambda$'s, we can prove that
$$
\sum \lambda_+(M)\le\max_{U^TU=I}\operatorname{tr}U^T\Lambda U.
$$
To prove the opposite we introduce the set of $n\times n$ matrices
$$
{\cal W}=\{W\colon W=W^T,\, 0\le W\le I\}
$$
where the inequalities are understood in the semidefinite sense, i.e. $W$ and $I-W$ are positive semidefinite matrices. Such matrices, in particular, have diagonal elements between $0$ and $1$ (a simple consequence of the definition of positive semidefiniteness). We have
$$
\operatorname{tr}\Lambda W=\sum_{i=1}^n\lambda_i W_{ii}.
$$
The maximization over ${\cal W}$ is trivial: the best choice is to take $W_{ii}=1$ for each positive $\lambda_i$ and $W_{ij}=0$ for all other entries. Thus,
$$
\max_{W\in{\cal W}}\operatorname{tr}\Lambda W=\sum\lambda_+(M).
$$
Finally, we rewrite the RHS in $(*)$ as
$$
\max_{U^TU=I}\operatorname{tr}\Lambda UU^T
$$
and notice that $W=UU^T\in{\cal W}$ if $U^TU=I$. Indeed, $W=W^T$ and $W\ge 0$ are clear. Furthermore, $U^TU=I$ $\Rightarrow$ $\|U\|=1$ $\Rightarrow$ $\|U^T\|=1$ $\Rightarrow$ $UU^T\le I$ $\Rightarrow$ $I-UU^T\ge 0$. Thus,
$$
\sum\lambda_+(M)=\max_{W\in{\cal W}}\operatorname{tr}\Lambda W\ge \max_{U^TU=I}\operatorname{tr}\Lambda UU^T,
$$
since the maximum over a subset is smaller. This gives the opposite inequality, which proves $(*)$.</p>
Answer 2: <p>By properties of eigensystems, we have $Mu_i = \lambda_i u_i$ for orthonormal eigenvectors $\{u_1, \cdots, u_n\}$, hence $u_i^\intercal M u_i = \lambda_i$. For any $U \in S_M$, where $S_M \subseteq S$ is the restriction of $S$ to eigenvectors of $M$, the sum $\sum_i u_i^\intercal M u_i$ is a sum of a subset of eigenvalues of $M$. Clearly the maximum such sum is the sum of the positive eigenvalues of $M$, and the conclusion follows.</p>
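<p><em>Editorial note:</em> both answers can be illustrated numerically: the maximum is attained by the orthonormal set of eigenvectors belonging to the positive eigenvalues. The sketch below assumes Python with numpy; it is not part of the original thread.</p>
<pre><code># Sketch: for a random real symmetric M, compare the sum of its positive
# eigenvalues with tr(U^T M U), where the columns of U are the orthonormal
# eigenvectors belonging to those positive eigenvalues.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
M = (A + A.T) / 2.0                  # random real symmetric matrix

eigenvalues, Q = np.linalg.eigh(M)   # columns of Q: orthonormal eigenvectors
U = Q[:, eigenvalues > 0]            # eigenvectors of the positive eigenvalues

print(eigenvalues[eigenvalues > 0].sum())  # sum of positive eigenvalues
print(np.trace(U.T @ M @ U))               # should agree with the line above
</code></pre>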
|
Probability of selecting a ball from a box. | 2,091,412 | 5 | https://math.stackexchange.com/questions/2091412/probability-of-selecting-a-ball-from-a-box | <p>Suppose we have <span class="math-container">$12$</span> balls, identical except that they carry distinct ID numbers, <span class="math-container">$1-12$</span>. We have <span class="math-container">$3$</span> identical boxes, each of which can contain exactly <span class="math-container">$4$</span> balls. One after the other, the balls are thrown at random into the three boxes.</p>
<p>Suppose Mr. A will buy one ball from each box, so he will buy a total of three balls from the three boxes. Mr. A's habit is to buy, from each box, the ball whose ID is the smallest among the balls in that box.</p>
<p>For example, the randomly thrown balls in a box have IDs <span class="math-container">$4$</span>, <span class="math-container">$8$</span>, <span class="math-container">$9$</span>, and <span class="math-container">$12$</span>. So Mr. A will buy the ball with ID <span class="math-container">$4$</span> from that box.</p>
<p>Then Mr. A goes to another box, and here the balls have IDs <span class="math-container">$1$</span>, <span class="math-container">$3$</span>, <span class="math-container">$6$</span>, and <span class="math-container">$11$</span>. He will buy the ball with ID <span class="math-container">$1$</span> from that box.</p>
<p>Finally, Mr. A goes to the last box, and here the balls have IDs <span class="math-container">$2$</span>, <span class="math-container">$5$</span>, <span class="math-container">$7$</span>, and <span class="math-container">$10$</span>. He will buy the ball with ID <span class="math-container">$2$</span> from that box.</p>
<p>My question is:</p>
<blockquote>
<p>What is the probability that Mr. A will buy the ball with ID <span class="math-container">$i$</span>, where <span class="math-container">$i=1,2,\ldots,12$</span>?</p>
</blockquote>
<p>What I think is that we need to consider the following two things to calculate the probability:</p>
<blockquote>
<p><span class="math-container">$(1)$</span> the position of a ball in the box. Mr. A will buy the ball with ID number <span class="math-container">$2$</span> if and only if the box does not contain the ball with ID number <span class="math-container">$1$</span>.</p>
<p><span class="math-container">$(2)$</span> If Mr. A buy a ball with smallest ID from the first box in front of him, then are there <span class="math-container">$11$</span> balls remaining to buy another two balls or are there <span class="math-container">$8$</span> balls (excluding all <span class="math-container">$4$</span> balls from the first box)?</p>
</blockquote>
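<p><em>Editorial note:</em> the probabilities asked for here are easy to estimate by simulation. The Monte Carlo sketch below assumes Python with numpy; it is not part of the original question, and any closed-form answer can be compared against it.</p>
<pre><code># Sketch: estimate P(ball i is bought), i = 1..12, by simulation.
# Each trial shuffles balls 1..12 into three boxes of four; Mr. A buys
# the smallest ID in each box.
import numpy as np

rng = np.random.default_rng(2)
trials = 100_000
bought = np.zeros(13)   # counts, indexed by ball ID 1..12

for _ in range(trials):
    perm = rng.permutation(np.arange(1, 13))  # random split into boxes
    for box in perm.reshape(3, 4):
        bought[box.min()] += 1                # Mr. A buys the smallest ID

for i in range(1, 13):
    print(i, bought[i] / trials)              # estimated P(ball i is bought)
</code></pre>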
| [
{
"answer_id": 2091488,
"score": 6,
"text": "<p>I got the following answer:</p>\n\n<blockquote>\n <p>The probability that Mr. A will buy the ball with id $i$ is,\n $$p(i) := \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,. $$</p>\n</blockquote>\n\n<p>If you know this answer to be incorrect, please feel free to stop reading :)</p>\n\n<p>Total number of ways the balls can be distributed among three boxes is $$n := \\frac{12!}{(4!)^3}\\,.$$</p>\n\n<p>Let us assume that there are $12$ numbered slots, one slot is to be occupied by exactly one ball, slots $1$ to $4$ constitute box $1$, slots $5$ to $8$ constitute box $2$ and the rest constitute box $3$. At the beginning we will consider all the slots to be distinguishable but at the end we will consider all the slots in the same box to be indistinguishable from each other.</p>\n\n<p>We will count the number ways the $i$'th ball can be bought from box $1$. The $i$'th ball can be placed in either of the $4$ slots in box $1$. The balls with id $1$ to $(i-1)$ can be placed in box $2$ and $3$ in $\\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right)(i-1)!$ ways. Balls with id $(i+1)$ to $12$ can be arranged in the rest of slots in $(12-i)!$ ways. Multiplying all these numbers we get $4 \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!$, in order to impose indistinguishability of the slots in the same box we must divide this number by $(4!)^3$. So, the number ways the $i$'th ball can be bought from box $1$ is:\n$$n_1(i) := \\frac{4}{(4!)^3} \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!\\,.$$\nTherefore, the probability of the $i$'th ball being bought from box one is:\n$$p_1(i) := \\frac{n_1(i)}{n}\\,.$$\nThe $i$'th ball can be bought from either of the three boxes, so the net probability of the $i$'th ball being bought is:\n$$p(i) := 3 p_1(i) = \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,.$$</p>\n"
},
{
"answer_id": 2091546,
"score": 6,
"text": "<p>Ball $i$ is thrown into a box with $3$ other balls. There are $\\binom{11}3$ possible combinations for the three balls in the same box with ball $i,$ and all of them are equally likely. The man will buy ball $i$ if the three other balls have numbers greater than $i$. There are $12-i$ balls with numbers greater than $i,$ so there are $\\binom{12-i}3$ favorable cases, and the probability that the man buys ball $i$ is\n$$\\frac{\\binom{12-i}3}{\\binom{11}3}=\\frac{(12-i)(11-i)(10-i)}{990}.$$\nThis is equivalent to the answer posted earlier by Nafiz Ishtiaque Ornob.</p>\n"
},
{
"answer_id": 2091857,
"score": 2,
"text": "<p>For $1\\lt j\\lt k$, there are $3!\\binom{12-k}{3}\\binom{8-j}{3}$ ways to buy $1,j,k$.</p>\n\n<p>$3!$ due to the arrangements of $1,j,k$. $\\binom{12-k}{3}$ due to the number of ways to choose $3$ balls greater than $k$. $\\binom{8-j}{3}$ due to the number of ways to choose $3$ balls greater than $j$ from those left over from choosing $3$ greater than $k$.</p>\n\n<p>The number of ways for $n$ to be the greatest is\n$$\n\\begin{align}\n3!\\binom{12-n}{3}\\sum_{j=2}^{n-1}\\binom{8-j}{3}\n&=3!\\binom{12-n}{3}\\left[\\binom{7}{4}-\\binom{9-n}{4}\\right]\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{3}-3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nThe number of ways for $n$ to be the middle is\n$$\n\\begin{align}\n3!\\binom{8-n}{3}\\sum_{k=n+1}^9\\binom{12-k}{3}\n&=3!\\binom{8-n}{3}\\binom{12-n}{4}\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nSince the total number of arrangements is $\\frac{12!}{(4!)^3}$, and $\\frac{3!\\binom{7}{4}}{\\frac{12!}{(4!)^3}}=\\frac1{165}$, the probability of buying the ball labeled $n$ would be\n$$\np(n)=\\frac1{165}\\binom{12-n}{3}\n$$</p>\n"
},
{
"answer_id": 2091488,
"score": 6,
"text": "<p>I got the following answer:</p>\n\n<blockquote>\n <p>The probability that Mr. A will buy the ball with id $i$ is,\n $$p(i) := \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,. $$</p>\n</blockquote>\n\n<p>If you know this answer to be incorrect, please feel free to stop reading :)</p>\n\n<p>Total number of ways the balls can be distributed among three boxes is $$n := \\frac{12!}{(4!)^3}\\,.$$</p>\n\n<p>Let us assume that there are $12$ numbered slots, one slot is to be occupied by exactly one ball, slots $1$ to $4$ constitute box $1$, slots $5$ to $8$ constitute box $2$ and the rest constitute box $3$. At the beginning we will consider all the slots to be distinguishable but at the end we will consider all the slots in the same box to be indistinguishable from each other.</p>\n\n<p>We will count the number ways the $i$'th ball can be bought from box $1$. The $i$'th ball can be placed in either of the $4$ slots in box $1$. The balls with id $1$ to $(i-1)$ can be placed in box $2$ and $3$ in $\\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right)(i-1)!$ ways. Balls with id $(i+1)$ to $12$ can be arranged in the rest of slots in $(12-i)!$ ways. Multiplying all these numbers we get $4 \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!$, in order to impose indistinguishability of the slots in the same box we must divide this number by $(4!)^3$. So, the number ways the $i$'th ball can be bought from box $1$ is:\n$$n_1(i) := \\frac{4}{(4!)^3} \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!\\,.$$\nTherefore, the probability of the $i$'th ball being bought from box one is:\n$$p_1(i) := \\frac{n_1(i)}{n}\\,.$$\nThe $i$'th ball can be bought from either of the three boxes, so the net probability of the $i$'th ball being bought is:\n$$p(i) := 3 p_1(i) = \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,.$$</p>\n"
},
{
"answer_id": 2091546,
"score": 6,
"text": "<p>Ball $i$ is thrown into a box with $3$ other balls. There are $\\binom{11}3$ possible combinations for the three balls in the same box with ball $i,$ and all of them are equally likely. The man will buy ball $i$ if the three other balls have numbers greater than $i$. There are $12-i$ balls with numbers greater than $i,$ so there are $\\binom{12-i}3$ favorable cases, and the probability that the man buys ball $i$ is\n$$\\frac{\\binom{12-i}3}{\\binom{11}3}=\\frac{(12-i)(11-i)(10-i)}{990}.$$\nThis is equivalent to the answer posted earlier by Nafiz Ishtiaque Ornob.</p>\n"
},
{
"answer_id": 2091857,
"score": 2,
"text": "<p>For $1\\lt j\\lt k$, there are $3!\\binom{12-k}{3}\\binom{8-j}{3}$ ways to buy $1,j,k$.</p>\n\n<p>$3!$ due to the arrangements of $1,j,k$. $\\binom{12-k}{3}$ due to the number of ways to choose $3$ balls greater than $k$. $\\binom{8-j}{3}$ due to the number of ways to choose $3$ balls greater than $j$ from those left over from choosing $3$ greater than $k$.</p>\n\n<p>The number of ways for $n$ to be the greatest is\n$$\n\\begin{align}\n3!\\binom{12-n}{3}\\sum_{j=2}^{n-1}\\binom{8-j}{3}\n&=3!\\binom{12-n}{3}\\left[\\binom{7}{4}-\\binom{9-n}{4}\\right]\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{3}-3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nThe number of ways for $n$ to be the middle is\n$$\n\\begin{align}\n3!\\binom{8-n}{3}\\sum_{k=n+1}^9\\binom{12-k}{3}\n&=3!\\binom{8-n}{3}\\binom{12-n}{4}\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nSince the total number of arrangements is $\\frac{12!}{(4!)^3}$, and $\\frac{3!\\binom{7}{4}}{\\frac{12!}{(4!)^3}}=\\frac1{165}$, the probability of buying the ball labeled $n$ would be\n$$\np(n)=\\frac1{165}\\binom{12-n}{3}\n$$</p>\n"
},
{
"answer_id": 2091488,
"score": 6,
"text": "<p>I got the following answer:</p>\n\n<blockquote>\n <p>The probability that Mr. A will buy the ball with id $i$ is,\n $$p(i) := \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,. $$</p>\n</blockquote>\n\n<p>If you know this answer to be incorrect, please feel free to stop reading :)</p>\n\n<p>Total number of ways the balls can be distributed among three boxes is $$n := \\frac{12!}{(4!)^3}\\,.$$</p>\n\n<p>Let us assume that there are $12$ numbered slots, one slot is to be occupied by exactly one ball, slots $1$ to $4$ constitute box $1$, slots $5$ to $8$ constitute box $2$ and the rest constitute box $3$. At the beginning we will consider all the slots to be distinguishable but at the end we will consider all the slots in the same box to be indistinguishable from each other.</p>\n\n<p>We will count the number ways the $i$'th ball can be bought from box $1$. The $i$'th ball can be placed in either of the $4$ slots in box $1$. The balls with id $1$ to $(i-1)$ can be placed in box $2$ and $3$ in $\\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right)(i-1)!$ ways. Balls with id $(i+1)$ to $12$ can be arranged in the rest of slots in $(12-i)!$ ways. Multiplying all these numbers we get $4 \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!$, in order to impose indistinguishability of the slots in the same box we must divide this number by $(4!)^3$. So, the number ways the $i$'th ball can be bought from box $1$ is:\n$$n_1(i) := \\frac{4}{(4!)^3} \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!\\,.$$\nTherefore, the probability of the $i$'th ball being bought from box one is:\n$$p_1(i) := \\frac{n_1(i)}{n}\\,.$$\nThe $i$'th ball can be bought from either of the three boxes, so the net probability of the $i$'th ball being bought is:\n$$p(i) := 3 p_1(i) = \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,.$$</p>\n"
},
{
"answer_id": 2091546,
"score": 6,
"text": "<p>Ball $i$ is thrown into a box with $3$ other balls. There are $\\binom{11}3$ possible combinations for the three balls in the same box with ball $i,$ and all of them are equally likely. The man will buy ball $i$ if the three other balls have numbers greater than $i$. There are $12-i$ balls with numbers greater than $i,$ so there are $\\binom{12-i}3$ favorable cases, and the probability that the man buys ball $i$ is\n$$\\frac{\\binom{12-i}3}{\\binom{11}3}=\\frac{(12-i)(11-i)(10-i)}{990}.$$\nThis is equivalent to the answer posted earlier by Nafiz Ishtiaque Ornob.</p>\n"
},
{
"answer_id": 2091857,
"score": 2,
"text": "<p>For $1\\lt j\\lt k$, there are $3!\\binom{12-k}{3}\\binom{8-j}{3}$ ways to buy $1,j,k$.</p>\n\n<p>$3!$ due to the arrangements of $1,j,k$. $\\binom{12-k}{3}$ due to the number of ways to choose $3$ balls greater than $k$. $\\binom{8-j}{3}$ due to the number of ways to choose $3$ balls greater than $j$ from those left over from choosing $3$ greater than $k$.</p>\n\n<p>The number of ways for $n$ to be the greatest is\n$$\n\\begin{align}\n3!\\binom{12-n}{3}\\sum_{j=2}^{n-1}\\binom{8-j}{3}\n&=3!\\binom{12-n}{3}\\left[\\binom{7}{4}-\\binom{9-n}{4}\\right]\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{3}-3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nThe number of ways for $n$ to be the middle is\n$$\n\\begin{align}\n3!\\binom{8-n}{3}\\sum_{k=n+1}^9\\binom{12-k}{3}\n&=3!\\binom{8-n}{3}\\binom{12-n}{4}\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nSince the total number of arrangements is $\\frac{12!}{(4!)^3}$, and $\\frac{3!\\binom{7}{4}}{\\frac{12!}{(4!)^3}}=\\frac1{165}$, the probability of buying the ball labeled $n$ would be\n$$\np(n)=\\frac1{165}\\binom{12-n}{3}\n$$</p>\n"
},
{
"answer_id": 2091488,
"score": 6,
"text": "<p>I got the following answer:</p>\n\n<blockquote>\n <p>The probability that Mr. A will buy the ball with id $i$ is,\n $$p(i) := \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,. $$</p>\n</blockquote>\n\n<p>If you know this answer to be incorrect, please feel free to stop reading :)</p>\n\n<p>Total number of ways the balls can be distributed among three boxes is $$n := \\frac{12!}{(4!)^3}\\,.$$</p>\n\n<p>Let us assume that there are $12$ numbered slots, one slot is to be occupied by exactly one ball, slots $1$ to $4$ constitute box $1$, slots $5$ to $8$ constitute box $2$ and the rest constitute box $3$. At the beginning we will consider all the slots to be distinguishable but at the end we will consider all the slots in the same box to be indistinguishable from each other.</p>\n\n<p>We will count the number ways the $i$'th ball can be bought from box $1$. The $i$'th ball can be placed in either of the $4$ slots in box $1$. The balls with id $1$ to $(i-1)$ can be placed in box $2$ and $3$ in $\\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right)(i-1)!$ ways. Balls with id $(i+1)$ to $12$ can be arranged in the rest of slots in $(12-i)!$ ways. Multiplying all these numbers we get $4 \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!$, in order to impose indistinguishability of the slots in the same box we must divide this number by $(4!)^3$. So, the number ways the $i$'th ball can be bought from box $1$ is:\n$$n_1(i) := \\frac{4}{(4!)^3} \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!\\,.$$\nTherefore, the probability of the $i$'th ball being bought from box one is:\n$$p_1(i) := \\frac{n_1(i)}{n}\\,.$$\nThe $i$'th ball can be bought from either of the three boxes, so the net probability of the $i$'th ball being bought is:\n$$p(i) := 3 p_1(i) = \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,.$$</p>\n"
},
{
"answer_id": 2091546,
"score": 6,
"text": "<p>Ball $i$ is thrown into a box with $3$ other balls. There are $\\binom{11}3$ possible combinations for the three balls in the same box with ball $i,$ and all of them are equally likely. The man will buy ball $i$ if the three other balls have numbers greater than $i$. There are $12-i$ balls with numbers greater than $i,$ so there are $\\binom{12-i}3$ favorable cases, and the probability that the man buys ball $i$ is\n$$\\frac{\\binom{12-i}3}{\\binom{11}3}=\\frac{(12-i)(11-i)(10-i)}{990}.$$\nThis is equivalent to the answer posted earlier by Nafiz Ishtiaque Ornob.</p>\n"
},
{
"answer_id": 2091857,
"score": 2,
"text": "<p>For $1\\lt j\\lt k$, there are $3!\\binom{12-k}{3}\\binom{8-j}{3}$ ways to buy $1,j,k$.</p>\n\n<p>$3!$ due to the arrangements of $1,j,k$. $\\binom{12-k}{3}$ due to the number of ways to choose $3$ balls greater than $k$. $\\binom{8-j}{3}$ due to the number of ways to choose $3$ balls greater than $j$ from those left over from choosing $3$ greater than $k$.</p>\n\n<p>The number of ways for $n$ to be the greatest is\n$$\n\\begin{align}\n3!\\binom{12-n}{3}\\sum_{j=2}^{n-1}\\binom{8-j}{3}\n&=3!\\binom{12-n}{3}\\left[\\binom{7}{4}-\\binom{9-n}{4}\\right]\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{3}-3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nThe number of ways for $n$ to be the middle is\n$$\n\\begin{align}\n3!\\binom{8-n}{3}\\sum_{k=n+1}^9\\binom{12-k}{3}\n&=3!\\binom{8-n}{3}\\binom{12-n}{4}\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nSince the total number of arrangements is $\\frac{12!}{(4!)^3}$, and $\\frac{3!\\binom{7}{4}}{\\frac{12!}{(4!)^3}}=\\frac1{165}$, the probability of buying the ball labeled $n$ would be\n$$\np(n)=\\frac1{165}\\binom{12-n}{3}\n$$</p>\n"
},
{
"answer_id": 2091488,
"score": 6,
"text": "<p>I got the following answer:</p>\n\n<blockquote>\n <p>The probability that Mr. A will buy the ball with id $i$ is,\n $$p(i) := \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,. $$</p>\n</blockquote>\n\n<p>If you know this answer to be incorrect, please feel free to stop reading :)</p>\n\n<p>Total number of ways the balls can be distributed among three boxes is $$n := \\frac{12!}{(4!)^3}\\,.$$</p>\n\n<p>Let us assume that there are $12$ numbered slots, one slot is to be occupied by exactly one ball, slots $1$ to $4$ constitute box $1$, slots $5$ to $8$ constitute box $2$ and the rest constitute box $3$. At the beginning we will consider all the slots to be distinguishable but at the end we will consider all the slots in the same box to be indistinguishable from each other.</p>\n\n<p>We will count the number ways the $i$'th ball can be bought from box $1$. The $i$'th ball can be placed in either of the $4$ slots in box $1$. The balls with id $1$ to $(i-1)$ can be placed in box $2$ and $3$ in $\\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right)(i-1)!$ ways. Balls with id $(i+1)$ to $12$ can be arranged in the rest of slots in $(12-i)!$ ways. Multiplying all these numbers we get $4 \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!$, in order to impose indistinguishability of the slots in the same box we must divide this number by $(4!)^3$. So, the number ways the $i$'th ball can be bought from box $1$ is:\n$$n_1(i) := \\frac{4}{(4!)^3} \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!\\,.$$\nTherefore, the probability of the $i$'th ball being bought from box one is:\n$$p_1(i) := \\frac{n_1(i)}{n}\\,.$$\nThe $i$'th ball can be bought from either of the three boxes, so the net probability of the $i$'th ball being bought is:\n$$p(i) := 3 p_1(i) = \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,.$$</p>\n"
},
{
"answer_id": 2091546,
"score": 6,
"text": "<p>Ball $i$ is thrown into a box with $3$ other balls. There are $\\binom{11}3$ possible combinations for the three balls in the same box with ball $i,$ and all of them are equally likely. The man will buy ball $i$ if the three other balls have numbers greater than $i$. There are $12-i$ balls with numbers greater than $i,$ so there are $\\binom{12-i}3$ favorable cases, and the probability that the man buys ball $i$ is\n$$\\frac{\\binom{12-i}3}{\\binom{11}3}=\\frac{(12-i)(11-i)(10-i)}{990}.$$\nThis is equivalent to the answer posted earlier by Nafiz Ishtiaque Ornob.</p>\n"
},
{
"answer_id": 2091857,
"score": 2,
"text": "<p>For $1\\lt j\\lt k$, there are $3!\\binom{12-k}{3}\\binom{8-j}{3}$ ways to buy $1,j,k$.</p>\n\n<p>$3!$ due to the arrangements of $1,j,k$. $\\binom{12-k}{3}$ due to the number of ways to choose $3$ balls greater than $k$. $\\binom{8-j}{3}$ due to the number of ways to choose $3$ balls greater than $j$ from those left over from choosing $3$ greater than $k$.</p>\n\n<p>The number of ways for $n$ to be the greatest is\n$$\n\\begin{align}\n3!\\binom{12-n}{3}\\sum_{j=2}^{n-1}\\binom{8-j}{3}\n&=3!\\binom{12-n}{3}\\left[\\binom{7}{4}-\\binom{9-n}{4}\\right]\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{3}-3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nThe number of ways for $n$ to be the middle is\n$$\n\\begin{align}\n3!\\binom{8-n}{3}\\sum_{k=n+1}^9\\binom{12-k}{3}\n&=3!\\binom{8-n}{3}\\binom{12-n}{4}\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nSince the total number of arrangements is $\\frac{12!}{(4!)^3}$, and $\\frac{3!\\binom{7}{4}}{\\frac{12!}{(4!)^3}}=\\frac1{165}$, the probability of buying the ball labeled $n$ would be\n$$\np(n)=\\frac1{165}\\binom{12-n}{3}\n$$</p>\n"
},
{
"answer_id": 2091488,
"score": 6,
"text": "<p>I got the following answer:</p>\n\n<blockquote>\n <p>The probability that Mr. A will buy the ball with id $i$ is,\n $$p(i) := \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,. $$</p>\n</blockquote>\n\n<p>If you know this answer to be incorrect, please feel free to stop reading :)</p>\n\n<p>Total number of ways the balls can be distributed among three boxes is $$n := \\frac{12!}{(4!)^3}\\,.$$</p>\n\n<p>Let us assume that there are $12$ numbered slots, one slot is to be occupied by exactly one ball, slots $1$ to $4$ constitute box $1$, slots $5$ to $8$ constitute box $2$ and the rest constitute box $3$. At the beginning we will consider all the slots to be distinguishable but at the end we will consider all the slots in the same box to be indistinguishable from each other.</p>\n\n<p>We will count the number ways the $i$'th ball can be bought from box $1$. The $i$'th ball can be placed in either of the $4$ slots in box $1$. The balls with id $1$ to $(i-1)$ can be placed in box $2$ and $3$ in $\\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right)(i-1)!$ ways. Balls with id $(i+1)$ to $12$ can be arranged in the rest of slots in $(12-i)!$ ways. Multiplying all these numbers we get $4 \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!$, in order to impose indistinguishability of the slots in the same box we must divide this number by $(4!)^3$. So, the number ways the $i$'th ball can be bought from box $1$ is:\n$$n_1(i) := \\frac{4}{(4!)^3} \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!\\,.$$\nTherefore, the probability of the $i$'th ball being bought from box one is:\n$$p_1(i) := \\frac{n_1(i)}{n}\\,.$$\nThe $i$'th ball can be bought from either of the three boxes, so the net probability of the $i$'th ball being bought is:\n$$p(i) := 3 p_1(i) = \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,.$$</p>\n"
},
{
"answer_id": 2091546,
"score": 6,
"text": "<p>Ball $i$ is thrown into a box with $3$ other balls. There are $\\binom{11}3$ possible combinations for the three balls in the same box with ball $i,$ and all of them are equally likely. The man will buy ball $i$ if the three other balls have numbers greater than $i$. There are $12-i$ balls with numbers greater than $i,$ so there are $\\binom{12-i}3$ favorable cases, and the probability that the man buys ball $i$ is\n$$\\frac{\\binom{12-i}3}{\\binom{11}3}=\\frac{(12-i)(11-i)(10-i)}{990}.$$\nThis is equivalent to the answer posted earlier by Nafiz Ishtiaque Ornob.</p>\n"
},
{
"answer_id": 2091857,
"score": 2,
"text": "<p>For $1\\lt j\\lt k$, there are $3!\\binom{12-k}{3}\\binom{8-j}{3}$ ways to buy $1,j,k$.</p>\n\n<p>$3!$ due to the arrangements of $1,j,k$. $\\binom{12-k}{3}$ due to the number of ways to choose $3$ balls greater than $k$. $\\binom{8-j}{3}$ due to the number of ways to choose $3$ balls greater than $j$ from those left over from choosing $3$ greater than $k$.</p>\n\n<p>The number of ways for $n$ to be the greatest is\n$$\n\\begin{align}\n3!\\binom{12-n}{3}\\sum_{j=2}^{n-1}\\binom{8-j}{3}\n&=3!\\binom{12-n}{3}\\left[\\binom{7}{4}-\\binom{9-n}{4}\\right]\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{3}-3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nThe number of ways for $n$ to be the middle is\n$$\n\\begin{align}\n3!\\binom{8-n}{3}\\sum_{k=n+1}^9\\binom{12-k}{3}\n&=3!\\binom{8-n}{3}\\binom{12-n}{4}\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nSince the total number of arrangements is $\\frac{12!}{(4!)^3}$, and $\\frac{3!\\binom{7}{4}}{\\frac{12!}{(4!)^3}}=\\frac1{165}$, the probability of buying the ball labeled $n$ would be\n$$\np(n)=\\frac1{165}\\binom{12-n}{3}\n$$</p>\n"
},
{
"answer_id": 2091488,
"score": 6,
"text": "<p>I got the following answer:</p>\n\n<blockquote>\n <p>The probability that Mr. A will buy the ball with id $i$ is,\n $$p(i) := \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,. $$</p>\n</blockquote>\n\n<p>If you know this answer to be incorrect, please feel free to stop reading :)</p>\n\n<p>Total number of ways the balls can be distributed among three boxes is $$n := \\frac{12!}{(4!)^3}\\,.$$</p>\n\n<p>Let us assume that there are $12$ numbered slots, one slot is to be occupied by exactly one ball, slots $1$ to $4$ constitute box $1$, slots $5$ to $8$ constitute box $2$ and the rest constitute box $3$. At the beginning we will consider all the slots to be distinguishable but at the end we will consider all the slots in the same box to be indistinguishable from each other.</p>\n\n<p>We will count the number ways the $i$'th ball can be bought from box $1$. The $i$'th ball can be placed in either of the $4$ slots in box $1$. The balls with id $1$ to $(i-1)$ can be placed in box $2$ and $3$ in $\\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right)(i-1)!$ ways. Balls with id $(i+1)$ to $12$ can be arranged in the rest of slots in $(12-i)!$ ways. Multiplying all these numbers we get $4 \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!$, in order to impose indistinguishability of the slots in the same box we must divide this number by $(4!)^3$. So, the number ways the $i$'th ball can be bought from box $1$ is:\n$$n_1(i) := \\frac{4}{(4!)^3} \\left(\\begin{array}{c} 8 \\\\ i-1 \\end{array}\\right) (i-1)! (12-i)!\\,.$$\nTherefore, the probability of the $i$'th ball being bought from box one is:\n$$p_1(i) := \\frac{n_1(i)}{n}\\,.$$\nThe $i$'th ball can be bought from either of the three boxes, so the net probability of the $i$'th ball being bought is:\n$$p(i) := 3 p_1(i) = \\left( \\begin{array}{c} 8 \\\\ i-1 \\end{array} \\right) \\frac{(12-i)!(i-1)!}{11!}\\,.$$</p>\n"
},
{
"answer_id": 2091546,
"score": 6,
"text": "<p>Ball $i$ is thrown into a box with $3$ other balls. There are $\\binom{11}3$ possible combinations for the three balls in the same box with ball $i,$ and all of them are equally likely. The man will buy ball $i$ if the three other balls have numbers greater than $i$. There are $12-i$ balls with numbers greater than $i,$ so there are $\\binom{12-i}3$ favorable cases, and the probability that the man buys ball $i$ is\n$$\\frac{\\binom{12-i}3}{\\binom{11}3}=\\frac{(12-i)(11-i)(10-i)}{990}.$$\nThis is equivalent to the answer posted earlier by Nafiz Ishtiaque Ornob.</p>\n"
},
{
"answer_id": 2091857,
"score": 2,
"text": "<p>For $1\\lt j\\lt k$, there are $3!\\binom{12-k}{3}\\binom{8-j}{3}$ ways to buy $1,j,k$.</p>\n\n<p>$3!$ due to the arrangements of $1,j,k$. $\\binom{12-k}{3}$ due to the number of ways to choose $3$ balls greater than $k$. $\\binom{8-j}{3}$ due to the number of ways to choose $3$ balls greater than $j$ from those left over from choosing $3$ greater than $k$.</p>\n\n<p>The number of ways for $n$ to be the greatest is\n$$\n\\begin{align}\n3!\\binom{12-n}{3}\\sum_{j=2}^{n-1}\\binom{8-j}{3}\n&=3!\\binom{12-n}{3}\\left[\\binom{7}{4}-\\binom{9-n}{4}\\right]\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{3}-3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nThe number of ways for $n$ to be the middle is\n$$\n\\begin{align}\n3!\\binom{8-n}{3}\\sum_{k=n+1}^9\\binom{12-k}{3}\n&=3!\\binom{8-n}{3}\\binom{12-n}{4}\\\\\n&=3!\\binom{7}{4}\\binom{12-n}{7}\n\\end{align}\n$$\nSince the total number of arrangements is $\\frac{12!}{(4!)^3}$, and $\\frac{3!\\binom{7}{4}}{\\frac{12!}{(4!)^3}}=\\frac1{165}$, the probability of buying the ball labeled $n$ would be\n$$\np(n)=\\frac1{165}\\binom{12-n}{3}\n$$</p>\n"
}
] | Question: Probability of selecting a ball from a box.
Answer 1: <p>I got the following answer:</p>
<blockquote>
<p>The probability that Mr. A will buy the ball with id $i$ is,
$$p(i) := \left( \begin{array}{c} 8 \\ i-1 \end{array} \right) \frac{(12-i)!(i-1)!}{11!}\,. $$</p>
</blockquote>
<p>If you know this answer to be incorrect, please feel free to stop reading :)</p>
<p>The total number of ways the balls can be distributed among the three boxes is $$n := \frac{12!}{(4!)^3}\,.$$</p>
<p>Let us assume that there are $12$ numbered slots, one slot is to be occupied by exactly one ball, slots $1$ to $4$ constitute box $1$, slots $5$ to $8$ constitute box $2$ and the rest constitute box $3$. At the beginning we will consider all the slots to be distinguishable but at the end we will consider all the slots in the same box to be indistinguishable from each other.</p>
<p>We will count the number of ways the $i$'th ball can be bought from box $1$. The $i$'th ball can be placed in any of the $4$ slots in box $1$. The balls with ids $1$ to $(i-1)$ can be placed in boxes $2$ and $3$ in $\left( \begin{array}{c} 8 \\ i-1 \end{array} \right)(i-1)!$ ways. Balls with ids $(i+1)$ to $12$ can be arranged in the remaining slots in $(12-i)!$ ways. Multiplying all these numbers we get $4 \left(\begin{array}{c} 8 \\ i-1 \end{array}\right) (i-1)! (12-i)!$; to impose indistinguishability of the slots within the same box, we divide this number by $(4!)^3$. So, the number of ways the $i$'th ball can be bought from box $1$ is:
$$n_1(i) := \frac{4}{(4!)^3} \left(\begin{array}{c} 8 \\ i-1 \end{array}\right) (i-1)! (12-i)!\,.$$
Therefore, the probability of the $i$'th ball being bought from box one is:
$$p_1(i) := \frac{n_1(i)}{n}\,.$$
The $i$'th ball can be bought from any of the three boxes, and these three events are disjoint, so the net probability of the $i$'th ball being bought is:
$$p(i) := 3 p_1(i) = \left( \begin{array}{c} 8 \\ i-1 \end{array} \right) \frac{(12-i)!(i-1)!}{11!}\,.$$</p>
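<p>A quick exact-arithmetic check of this closed form — a minimal Python sketch, assuming Python 3.8+ for <code>math.comb</code>: it confirms that $p(i)$ coincides with $\binom{12-i}{3}\big/\binom{11}{3}$, the form derived in the next answer, and that the twelve probabilities sum to $3$, since three balls are bought in total.</p>
<pre><code>from fractions import Fraction
from math import comb, factorial

def p(i):
    # p(i) = C(8, i-1) * (12-i)! * (i-1)! / 11!, the closed form above
    return Fraction(comb(8, i - 1) * factorial(12 - i) * factorial(i - 1),
                    factorial(11))

# Same distribution as C(12-i, 3) / C(11, 3), and three balls are bought.
assert all(p(i) == Fraction(comb(12 - i, 3), comb(11, 3)) for i in range(1, 13))
assert sum(p(i) for i in range(1, 13)) == 3
</code></pre>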
Answer 2: <p>Ball $i$ is thrown into a box with $3$ other balls. There are $\binom{11}3$ possible combinations for the three balls in the same box with ball $i,$ and all of them are equally likely. The man will buy ball $i$ if the three other balls have numbers greater than $i$. There are $12-i$ balls with numbers greater than $i,$ so there are $\binom{12-i}3$ favorable cases, and the probability that the man buys ball $i$ is
$$\frac{\binom{12-i}3}{\binom{11}3}=\frac{(12-i)(11-i)(10-i)}{990}.$$
This is equivalent to the answer posted earlier by Nafiz Ishtiaque Ornob.</p>
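<p>The formula is also easy to confirm by simulation — a minimal sketch, assuming the process described above: the $12$ balls land at random in three boxes of $4$, and the man buys the lowest-numbered ball from each box.</p>
<pre><code>import random
from math import comb

rng = random.Random(0)
trials = 200_000
bought = [0] * 13  # bought[i] = number of times ball i was purchased
balls = list(range(1, 13))
for _ in range(trials):
    rng.shuffle(balls)
    # split the shuffled balls into three boxes of four; buy each box's minimum
    for box in (balls[0:4], balls[4:8], balls[8:12]):
        bought[min(box)] += 1

for i in range(1, 13):
    exact = comb(12 - i, 3) / comb(11, 3)
    print(f"ball {i:2d}: simulated {bought[i] / trials:.4f}, exact {exact:.4f}")
</code></pre>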
Answer 3: <p>For $1\lt j\lt k$, there are $3!\binom{12-k}{3}\binom{8-j}{3}$ ways to buy $1,j,k$.</p>
<p>$3!$ due to the arrangements of $1,j,k$. $\binom{12-k}{3}$ due to the number of ways to choose $3$ balls greater than $k$. $\binom{8-j}{3}$ due to the number of ways to choose $3$ balls greater than $j$ from those left over from choosing $3$ greater than $k$.</p>
<p>The number of ways for $n$ to be the greatest is
$$
\begin{align}
3!\binom{12-n}{3}\sum_{j=2}^{n-1}\binom{8-j}{3}
&=3!\binom{12-n}{3}\left[\binom{7}{4}-\binom{9-n}{4}\right]\\
&=3!\binom{7}{4}\binom{12-n}{3}-3!\binom{7}{4}\binom{12-n}{7}
\end{align}
$$
The number of ways for $n$ to be the middle is
$$
\begin{align}
3!\binom{8-n}{3}\sum_{k=n+1}^9\binom{12-k}{3}
&=3!\binom{8-n}{3}\binom{12-n}{4}\\
&=3!\binom{7}{4}\binom{12-n}{7}
\end{align}
$$
Since the total number of arrangements is $\frac{12!}{(4!)^3}$, and $\frac{3!\binom{7}{4}}{\frac{12!}{(4!)^3}}=\frac1{165}$, the probability of buying the ball labeled $n$ would be
$$
p(n)=\frac1{165}\binom{12-n}{3}
$$</p>
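<p>The two less obvious steps above — the hockey-stick identity $\sum_{j=2}^{n-1}\binom{8-j}{3}=\binom{7}{4}-\binom{9-n}{4}$ and the reduction of the constant $3!\binom{7}{4}\big/\frac{12!}{(4!)^3}$ to $\frac{1}{165}$ — can be verified directly; a minimal Python sketch:</p>
<pre><code>from fractions import Fraction
from math import comb, factorial

total = factorial(12) // factorial(4) ** 3   # 12!/(4!)^3 = 34650 arrangements
assert Fraction(6 * comb(7, 4), total) == Fraction(1, 165)

for n in range(2, 10):
    # sum_{j=2}^{n-1} C(8-j, 3) == C(7, 4) - C(9-n, 4)
    assert sum(comb(8 - j, 3) for j in range(2, n)) == comb(7, 4) - comb(9 - n, 4)
</code></pre>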
|
Find the limit of $6^n(2-x_n)$ where $x_n=\sqrt[3]{6+\sqrt[3]{6+\dots+\sqrt[3]{6}}}$ with $n$ roots | 2,091,544 | 5 | https://math.stackexchange.com/questions/2091544/find-the-limit-of-6n2-x-n-where-x-n-sqrt36-sqrt36-dots-sqrt36 | <blockquote>
<p>Let <span class="math-container">$x_n=\sqrt[3]{6+\sqrt[3]{6+\dots+\sqrt[3]{6}}}$</span> where the expression in the RHS has <span class="math-container">$n$</span> roots.</p>
<p>Find the following limit: <span class="math-container">$\lim \limits_{n\to \infty}6^n(2-x_n)$</span></p>
</blockquote>
<p><strong>My approach:</strong> I had two approaches. The first was the following: I showed that <span class="math-container">$x_n$</span> is increasing and bounded above by <span class="math-container">$2$</span>, so by the Weierstrass theorem it converges, and its limit is <span class="math-container">$2$</span>. But we cannot deduce from this that the limit of <span class="math-container">$6^n(2-x_n)$</span> is zero, because that expression is an indeterminate form of type <span class="math-container">$\infty \cdot 0$</span>.</p>
<p>The second was to note that <span class="math-container">$x_{n+1}^3=6+x_n$</span>, hence <span class="math-container">$x_{n+1}^3-8=x_n-2$</span>. From the last equation we get <span class="math-container">$(x_{n+1}-2)(x_{n+1}^2+2x_{n+1}+4)=x_n-2$</span>. I tried to work with this, but I got stuck.</p>
<p>Would be very grateful for hints or solutions.</p>
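<p>A quick numerical experiment (a sketch, not a proof) suggests the limit is $0$: iterating $x_{n+1}=\sqrt[3]{6+x_n}$ from $x_1=\sqrt[3]{6}$ in high-precision decimal arithmetic, the quantity $6^n(2-x_n)$ visibly decays geometrically.</p>
<pre><code>from decimal import Decimal, getcontext

getcontext().prec = 60
third = Decimal(1) / 3
x = Decimal(6) ** third            # x_1 = 6^(1/3), one cube root
for n in range(1, 21):
    print(n, 6 ** n * (2 - x))     # 6^n (2 - x_n) shrinks toward 0
    x = (6 + x) ** third           # x_{n+1} = (6 + x_n)^(1/3)
</code></pre>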
| [
{
"answer_id": 2091580,
"score": 2,
"text": "<p>Lets look at the sequence\n$$a_n=6^n(8-x_n^3)=6^n(2-x_n)(4+2x_n+x_n^2)$$\nNow $$\\lim_{n\\to\\infty}6^n(2-x_n)(4+2x_n+x_n^2)=\\lim_{n\\to\\infty}12\\cdot 6^n(2-x_n)$$ so lets instead look at the $\\lim a_n$ and whatever the limit is just multiply it by $12$.\nWe have that\n$$\\frac{a_{n+1}}{a_n}=\\frac{6^{n+1}(8-x_{n+1}^2)}{6^n(8-x_n^2)}=\\frac{6(2-x_n)}{(2-x_n)(x_n^2+2x_n+4)}=\\frac{6}{x_n^2+2x_n+4}$$\nSince $$x_n^2+2x_n+4\\geq6^{2/3}+2\\cdot 6^{1/3}+4\\geq 10$$ since it's increasing.</p>\n\n<p>From this we can see that each $a_n$ is decreasing by $\\frac{6}{10}$ at least so we have that\n$$a_0(\\frac{6}{10})^n\\geq a_n> 0 $$ by squeeze theorem we have that $a_n\\to 0$ so the original limit tends to $0$.A side note I picked $a_n$ to make calculations easier the same could be done with the original limit.</p>\n"
},
{
"answer_id": 2091618,
"score": 2,
"text": "<p>The recurrence says</p>\n\n<p>$$x_n=\\sqrt[3]{6+x_{n-1}}$$ or, in terms of $t_n=2-x_n$ with $t_n\\le2$,</p>\n\n<p>$$t_n=2-\\sqrt[3]{8-t_{n-1}}=\\frac{t_{n-1}}{4+2\\sqrt[3]{8-t_{n-1}}+\\left(\\sqrt[3]{8-t_{n-1}}\\right)^2}<\\frac{t_{n-1}}{4+2\\sqrt[3]6+\\sqrt[3]{36}}.$$</p>\n\n<p>This clearly decays faster than $6^{-n}$.</p>\n"
},
{
"answer_id": 2091576,
"score": 1,
"text": "<p>takes $f(x)= \\sqrt[3]{x+6}$ then see that $x_{n+1} = f(x_n)$\nand $$|f'(x) |= \\frac{1}{3\\sqrt[3]{(x+6)^2}} < \\frac{1}{3\\sqrt[3]{36}} <\\frac{1}{3\\sqrt[3]{2^3\\times 3}}=\\frac{1}{6\\sqrt[3]{3}}$$\nso that \nfor any $x,y $ one has\n$$|f(x)-f(y)| = |\\int_x^yf'(s)ds|\\le \\frac{1}{6\\sqrt[3]{3}}|x-y|$$\nshow then by induction using this inequality and the fact that $f(2)=2$ \n$$|x_n-2| \\le \\frac{1}{6^n\\sqrt[3]{3^n}}|x_0-2|$$\nso $$6^n| x_n-2|\\le \\frac{1}{\\sqrt[3]{3^n}}|x_0-2| \\to 0 $$</p>\n"
},
{
"answer_id": 2091618,
"score": 2,
"text": "<p>The recurrence says</p>\n\n<p>$$x_n=\\sqrt[3]{6+x_{n-1}}$$ or, in terms of $t_n=2-x_n$ with $t_n\\le2$,</p>\n\n<p>$$t_n=2-\\sqrt[3]{8-t_{n-1}}=\\frac{t_{n-1}}{4+2\\sqrt[3]{8-t_{n-1}}+\\left(\\sqrt[3]{8-t_{n-1}}\\right)^2}<\\frac{t_{n-1}}{4+2\\sqrt[3]6+\\sqrt[3]{36}}.$$</p>\n\n<p>This clearly decays faster than $6^{-n}$.</p>\n"
},
{
"answer_id": 2091576,
"score": 1,
"text": "<p>takes $f(x)= \\sqrt[3]{x+6}$ then see that $x_{n+1} = f(x_n)$\nand $$|f'(x) |= \\frac{1}{3\\sqrt[3]{(x+6)^2}} < \\frac{1}{3\\sqrt[3]{36}} <\\frac{1}{3\\sqrt[3]{2^3\\times 3}}=\\frac{1}{6\\sqrt[3]{3}}$$\nso that \nfor any $x,y $ one has\n$$|f(x)-f(y)| = |\\int_x^yf'(s)ds|\\le \\frac{1}{6\\sqrt[3]{3}}|x-y|$$\nshow then by induction using this inequality and the fact that $f(2)=2$ \n$$|x_n-2| \\le \\frac{1}{6^n\\sqrt[3]{3^n}}|x_0-2|$$\nso $$6^n| x_n-2|\\le \\frac{1}{\\sqrt[3]{3^n}}|x_0-2| \\to 0 $$</p>\n"
},
{
"answer_id": 2091580,
"score": 2,
"text": "<p>Lets look at the sequence\n$$a_n=6^n(8-x_n^3)=6^n(2-x_n)(4+2x_n+x_n^2)$$\nNow $$\\lim_{n\\to\\infty}6^n(2-x_n)(4+2x_n+x_n^2)=\\lim_{n\\to\\infty}12\\cdot 6^n(2-x_n)$$ so lets instead look at the $\\lim a_n$ and whatever the limit is just multiply it by $12$.\nWe have that\n$$\\frac{a_{n+1}}{a_n}=\\frac{6^{n+1}(8-x_{n+1}^2)}{6^n(8-x_n^2)}=\\frac{6(2-x_n)}{(2-x_n)(x_n^2+2x_n+4)}=\\frac{6}{x_n^2+2x_n+4}$$\nSince $$x_n^2+2x_n+4\\geq6^{2/3}+2\\cdot 6^{1/3}+4\\geq 10$$ since it's increasing.</p>\n\n<p>From this we can see that each $a_n$ is decreasing by $\\frac{6}{10}$ at least so we have that\n$$a_0(\\frac{6}{10})^n\\geq a_n> 0 $$ by squeeze theorem we have that $a_n\\to 0$ so the original limit tends to $0$.A side note I picked $a_n$ to make calculations easier the same could be done with the original limit.</p>\n"
},
{
"answer_id": 2091618,
"score": 2,
"text": "<p>The recurrence says</p>\n\n<p>$$x_n=\\sqrt[3]{6+x_{n-1}}$$ or, in terms of $t_n=2-x_n$ with $t_n\\le2$,</p>\n\n<p>$$t_n=2-\\sqrt[3]{8-t_{n-1}}=\\frac{t_{n-1}}{4+2\\sqrt[3]{8-t_{n-1}}+\\left(\\sqrt[3]{8-t_{n-1}}\\right)^2}<\\frac{t_{n-1}}{4+2\\sqrt[3]6+\\sqrt[3]{36}}.$$</p>\n\n<p>This clearly decays faster than $6^{-n}$.</p>\n"
},
{
"answer_id": 2091576,
"score": 1,
"text": "<p>takes $f(x)= \\sqrt[3]{x+6}$ then see that $x_{n+1} = f(x_n)$\nand $$|f'(x) |= \\frac{1}{3\\sqrt[3]{(x+6)^2}} < \\frac{1}{3\\sqrt[3]{36}} <\\frac{1}{3\\sqrt[3]{2^3\\times 3}}=\\frac{1}{6\\sqrt[3]{3}}$$\nso that \nfor any $x,y $ one has\n$$|f(x)-f(y)| = |\\int_x^yf'(s)ds|\\le \\frac{1}{6\\sqrt[3]{3}}|x-y|$$\nshow then by induction using this inequality and the fact that $f(2)=2$ \n$$|x_n-2| \\le \\frac{1}{6^n\\sqrt[3]{3^n}}|x_0-2|$$\nso $$6^n| x_n-2|\\le \\frac{1}{\\sqrt[3]{3^n}}|x_0-2| \\to 0 $$</p>\n"
},
{
"answer_id": 2091580,
"score": 2,
"text": "<p>Lets look at the sequence\n$$a_n=6^n(8-x_n^3)=6^n(2-x_n)(4+2x_n+x_n^2)$$\nNow $$\\lim_{n\\to\\infty}6^n(2-x_n)(4+2x_n+x_n^2)=\\lim_{n\\to\\infty}12\\cdot 6^n(2-x_n)$$ so lets instead look at the $\\lim a_n$ and whatever the limit is just multiply it by $12$.\nWe have that\n$$\\frac{a_{n+1}}{a_n}=\\frac{6^{n+1}(8-x_{n+1}^2)}{6^n(8-x_n^2)}=\\frac{6(2-x_n)}{(2-x_n)(x_n^2+2x_n+4)}=\\frac{6}{x_n^2+2x_n+4}$$\nSince $$x_n^2+2x_n+4\\geq6^{2/3}+2\\cdot 6^{1/3}+4\\geq 10$$ since it's increasing.</p>\n\n<p>From this we can see that each $a_n$ is decreasing by $\\frac{6}{10}$ at least so we have that\n$$a_0(\\frac{6}{10})^n\\geq a_n> 0 $$ by squeeze theorem we have that $a_n\\to 0$ so the original limit tends to $0$.A side note I picked $a_n$ to make calculations easier the same could be done with the original limit.</p>\n"
},
{
"answer_id": 2091618,
"score": 2,
"text": "<p>The recurrence says</p>\n\n<p>$$x_n=\\sqrt[3]{6+x_{n-1}}$$ or, in terms of $t_n=2-x_n$ with $t_n\\le2$,</p>\n\n<p>$$t_n=2-\\sqrt[3]{8-t_{n-1}}=\\frac{t_{n-1}}{4+2\\sqrt[3]{8-t_{n-1}}+\\left(\\sqrt[3]{8-t_{n-1}}\\right)^2}<\\frac{t_{n-1}}{4+2\\sqrt[3]6+\\sqrt[3]{36}}.$$</p>\n\n<p>This clearly decays faster than $6^{-n}$.</p>\n"
},
{
"answer_id": 2091576,
"score": 1,
"text": "<p>takes $f(x)= \\sqrt[3]{x+6}$ then see that $x_{n+1} = f(x_n)$\nand $$|f'(x) |= \\frac{1}{3\\sqrt[3]{(x+6)^2}} < \\frac{1}{3\\sqrt[3]{36}} <\\frac{1}{3\\sqrt[3]{2^3\\times 3}}=\\frac{1}{6\\sqrt[3]{3}}$$\nso that \nfor any $x,y $ one has\n$$|f(x)-f(y)| = |\\int_x^yf'(s)ds|\\le \\frac{1}{6\\sqrt[3]{3}}|x-y|$$\nshow then by induction using this inequality and the fact that $f(2)=2$ \n$$|x_n-2| \\le \\frac{1}{6^n\\sqrt[3]{3^n}}|x_0-2|$$\nso $$6^n| x_n-2|\\le \\frac{1}{\\sqrt[3]{3^n}}|x_0-2| \\to 0 $$</p>\n"
},
{
"answer_id": 2091580,
"score": 2,
"text": "<p>Lets look at the sequence\n$$a_n=6^n(8-x_n^3)=6^n(2-x_n)(4+2x_n+x_n^2)$$\nNow $$\\lim_{n\\to\\infty}6^n(2-x_n)(4+2x_n+x_n^2)=\\lim_{n\\to\\infty}12\\cdot 6^n(2-x_n)$$ so lets instead look at the $\\lim a_n$ and whatever the limit is just multiply it by $12$.\nWe have that\n$$\\frac{a_{n+1}}{a_n}=\\frac{6^{n+1}(8-x_{n+1}^2)}{6^n(8-x_n^2)}=\\frac{6(2-x_n)}{(2-x_n)(x_n^2+2x_n+4)}=\\frac{6}{x_n^2+2x_n+4}$$\nSince $$x_n^2+2x_n+4\\geq6^{2/3}+2\\cdot 6^{1/3}+4\\geq 10$$ since it's increasing.</p>\n\n<p>From this we can see that each $a_n$ is decreasing by $\\frac{6}{10}$ at least so we have that\n$$a_0(\\frac{6}{10})^n\\geq a_n> 0 $$ by squeeze theorem we have that $a_n\\to 0$ so the original limit tends to $0$.A side note I picked $a_n$ to make calculations easier the same could be done with the original limit.</p>\n"
},
{
"answer_id": 2091618,
"score": 2,
"text": "<p>The recurrence says</p>\n\n<p>$$x_n=\\sqrt[3]{6+x_{n-1}}$$ or, in terms of $t_n=2-x_n$ with $t_n\\le2$,</p>\n\n<p>$$t_n=2-\\sqrt[3]{8-t_{n-1}}=\\frac{t_{n-1}}{4+2\\sqrt[3]{8-t_{n-1}}+\\left(\\sqrt[3]{8-t_{n-1}}\\right)^2}<\\frac{t_{n-1}}{4+2\\sqrt[3]6+\\sqrt[3]{36}}.$$</p>\n\n<p>This clearly decays faster than $6^{-n}$.</p>\n"
},
{
"answer_id": 2091576,
"score": 1,
"text": "<p>takes $f(x)= \\sqrt[3]{x+6}$ then see that $x_{n+1} = f(x_n)$\nand $$|f'(x) |= \\frac{1}{3\\sqrt[3]{(x+6)^2}} < \\frac{1}{3\\sqrt[3]{36}} <\\frac{1}{3\\sqrt[3]{2^3\\times 3}}=\\frac{1}{6\\sqrt[3]{3}}$$\nso that \nfor any $x,y $ one has\n$$|f(x)-f(y)| = |\\int_x^yf'(s)ds|\\le \\frac{1}{6\\sqrt[3]{3}}|x-y|$$\nshow then by induction using this inequality and the fact that $f(2)=2$ \n$$|x_n-2| \\le \\frac{1}{6^n\\sqrt[3]{3^n}}|x_0-2|$$\nso $$6^n| x_n-2|\\le \\frac{1}{\\sqrt[3]{3^n}}|x_0-2| \\to 0 $$</p>\n"
},
{
"answer_id": 2091580,
"score": 2,
"text": "<p>Lets look at the sequence\n$$a_n=6^n(8-x_n^3)=6^n(2-x_n)(4+2x_n+x_n^2)$$\nNow $$\\lim_{n\\to\\infty}6^n(2-x_n)(4+2x_n+x_n^2)=\\lim_{n\\to\\infty}12\\cdot 6^n(2-x_n)$$ so lets instead look at the $\\lim a_n$ and whatever the limit is just multiply it by $12$.\nWe have that\n$$\\frac{a_{n+1}}{a_n}=\\frac{6^{n+1}(8-x_{n+1}^2)}{6^n(8-x_n^2)}=\\frac{6(2-x_n)}{(2-x_n)(x_n^2+2x_n+4)}=\\frac{6}{x_n^2+2x_n+4}$$\nSince $$x_n^2+2x_n+4\\geq6^{2/3}+2\\cdot 6^{1/3}+4\\geq 10$$ since it's increasing.</p>\n\n<p>From this we can see that each $a_n$ is decreasing by $\\frac{6}{10}$ at least so we have that\n$$a_0(\\frac{6}{10})^n\\geq a_n> 0 $$ by squeeze theorem we have that $a_n\\to 0$ so the original limit tends to $0$.A side note I picked $a_n$ to make calculations easier the same could be done with the original limit.</p>\n"
},
{
"answer_id": 2091618,
"score": 2,
"text": "<p>The recurrence says</p>\n\n<p>$$x_n=\\sqrt[3]{6+x_{n-1}}$$ or, in terms of $t_n=2-x_n$ with $t_n\\le2$,</p>\n\n<p>$$t_n=2-\\sqrt[3]{8-t_{n-1}}=\\frac{t_{n-1}}{4+2\\sqrt[3]{8-t_{n-1}}+\\left(\\sqrt[3]{8-t_{n-1}}\\right)^2}<\\frac{t_{n-1}}{4+2\\sqrt[3]6+\\sqrt[3]{36}}.$$</p>\n\n<p>This clearly decays faster than $6^{-n}$.</p>\n"
},
{
"answer_id": 2091576,
"score": 1,
"text": "<p>takes $f(x)= \\sqrt[3]{x+6}$ then see that $x_{n+1} = f(x_n)$\nand $$|f'(x) |= \\frac{1}{3\\sqrt[3]{(x+6)^2}} < \\frac{1}{3\\sqrt[3]{36}} <\\frac{1}{3\\sqrt[3]{2^3\\times 3}}=\\frac{1}{6\\sqrt[3]{3}}$$\nso that \nfor any $x,y $ one has\n$$|f(x)-f(y)| = |\\int_x^yf'(s)ds|\\le \\frac{1}{6\\sqrt[3]{3}}|x-y|$$\nshow then by induction using this inequality and the fact that $f(2)=2$ \n$$|x_n-2| \\le \\frac{1}{6^n\\sqrt[3]{3^n}}|x_0-2|$$\nso $$6^n| x_n-2|\\le \\frac{1}{\\sqrt[3]{3^n}}|x_0-2| \\to 0 $$</p>\n"
},
{
"answer_id": 2091580,
"score": 2,
"text": "<p>Lets look at the sequence\n$$a_n=6^n(8-x_n^3)=6^n(2-x_n)(4+2x_n+x_n^2)$$\nNow $$\\lim_{n\\to\\infty}6^n(2-x_n)(4+2x_n+x_n^2)=\\lim_{n\\to\\infty}12\\cdot 6^n(2-x_n)$$ so lets instead look at the $\\lim a_n$ and whatever the limit is just multiply it by $12$.\nWe have that\n$$\\frac{a_{n+1}}{a_n}=\\frac{6^{n+1}(8-x_{n+1}^2)}{6^n(8-x_n^2)}=\\frac{6(2-x_n)}{(2-x_n)(x_n^2+2x_n+4)}=\\frac{6}{x_n^2+2x_n+4}$$\nSince $$x_n^2+2x_n+4\\geq6^{2/3}+2\\cdot 6^{1/3}+4\\geq 10$$ since it's increasing.</p>\n\n<p>From this we can see that each $a_n$ is decreasing by $\\frac{6}{10}$ at least so we have that\n$$a_0(\\frac{6}{10})^n\\geq a_n> 0 $$ by squeeze theorem we have that $a_n\\to 0$ so the original limit tends to $0$.A side note I picked $a_n$ to make calculations easier the same could be done with the original limit.</p>\n"
},
{
"answer_id": 2091618,
"score": 2,
"text": "<p>The recurrence says</p>\n\n<p>$$x_n=\\sqrt[3]{6+x_{n-1}}$$ or, in terms of $t_n=2-x_n$ with $t_n\\le2$,</p>\n\n<p>$$t_n=2-\\sqrt[3]{8-t_{n-1}}=\\frac{t_{n-1}}{4+2\\sqrt[3]{8-t_{n-1}}+\\left(\\sqrt[3]{8-t_{n-1}}\\right)^2}<\\frac{t_{n-1}}{4+2\\sqrt[3]6+\\sqrt[3]{36}}.$$</p>\n\n<p>This clearly decays faster than $6^{-n}$.</p>\n"
},
{
"answer_id": 2091576,
"score": 1,
"text": "<p>takes $f(x)= \\sqrt[3]{x+6}$ then see that $x_{n+1} = f(x_n)$\nand $$|f'(x) |= \\frac{1}{3\\sqrt[3]{(x+6)^2}} < \\frac{1}{3\\sqrt[3]{36}} <\\frac{1}{3\\sqrt[3]{2^3\\times 3}}=\\frac{1}{6\\sqrt[3]{3}}$$\nso that \nfor any $x,y $ one has\n$$|f(x)-f(y)| = |\\int_x^yf'(s)ds|\\le \\frac{1}{6\\sqrt[3]{3}}|x-y|$$\nshow then by induction using this inequality and the fact that $f(2)=2$ \n$$|x_n-2| \\le \\frac{1}{6^n\\sqrt[3]{3^n}}|x_0-2|$$\nso $$6^n| x_n-2|\\le \\frac{1}{\\sqrt[3]{3^n}}|x_0-2| \\to 0 $$</p>\n"
}
] | Question: Find the limit of $6^n(2-x_n)$ where $x_n=\sqrt[3]{6+\sqrt[3]{6+\dots+\sqrt[3]{6}}}$ with $n$ roots
Answer 1: <p>Let's look at the sequence
$$a_n=6^n(8-x_n^3)=6^n(2-x_n)(4+2x_n+x_n^2).$$
Since $$\lim_{n\to\infty}6^n(2-x_n)(4+2x_n+x_n^2)=\lim_{n\to\infty}12\cdot 6^n(2-x_n),$$ let us instead find $\lim a_n$; whatever it is, the original limit is that value divided by $12$.
We have
$$\frac{a_{n+1}}{a_n}=\frac{6^{n+1}(8-x_{n+1}^3)}{6^n(8-x_n^3)}=\frac{6(2-x_n)}{(2-x_n)(x_n^2+2x_n+4)}=\frac{6}{x_n^2+2x_n+4}.$$
Since the sequence is increasing with $x_1=6^{1/3}$, we get $$x_n^2+2x_n+4\geq6^{2/3}+2\cdot 6^{1/3}+4\geq 10.$$</p>
<p>So each step shrinks $a_n$ by a factor of at most $\frac{6}{10}$, giving
$$a_1\left(\frac{6}{10}\right)^{n-1}\geq a_n> 0.$$ By the squeeze theorem $a_n\to 0$, so the original limit is $0$. As a side note, I picked $a_n$ to make the calculations easier; the same could be done with the original limit.</p>
Answer 2: <p>The recurrence says</p>
<p>$$x_n=\sqrt[3]{6+x_{n-1}}$$ or, in terms of $t_n=2-x_n$ with $t_n\le2$,</p>
<p>$$t_n=2-\sqrt[3]{8-t_{n-1}}=\frac{t_{n-1}}{4+2\sqrt[3]{8-t_{n-1}}+\left(\sqrt[3]{8-t_{n-1}}\right)^2}<\frac{t_{n-1}}{4+2\sqrt[3]6+\sqrt[3]{36}}.$$</p>
<p>Since $4+2\sqrt[3]6+\sqrt[3]{36}\approx10.94>6$, this clearly decays faster than $6^{-n}$, so $6^n t_n\to0$.</p>
Answer 3: <p>Take $f(x)= \sqrt[3]{x+6}$ and note that $x_{n+1} = f(x_n)$.
For $x\ge0$ (all iterates are nonnegative),
$$|f'(x) |= \frac{1}{3\sqrt[3]{(x+6)^2}} \le \frac{1}{3\sqrt[3]{36}} <\frac{1}{3\sqrt[3]{2^3\times 3}}=\frac{1}{6\sqrt[3]{3}},$$
so that
for any $x,y\ge0$ one has
$$|f(x)-f(y)| = \left|\int_x^yf'(s)\,ds\right|\le \frac{1}{6\sqrt[3]{3}}|x-y|.$$
Then show by induction, using this inequality and the fact that $f(2)=2$, that
$$|x_n-2| \le \frac{1}{6^n\sqrt[3]{3^n}}|x_0-2|,$$
so $$6^n| x_n-2|\le \frac{1}{\sqrt[3]{3^n}}|x_0-2| \to 0. $$</p>
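<p>A small numeric sketch (added for illustration, not part of the original answer; it assumes Python and the seed $x_0=0$, so all iterates are nonnegative) checking the contraction bound of Answer 3 and the decay of $6^n|x_n-2|$:</p>
<pre><code># Verify |x_n - 2| <= |x_0 - 2| / (6 * 3**(1/3))**n for the first few n.
x0 = 0.0
x = x0
for n in range(1, 13):
    x = (6.0 + x) ** (1.0 / 3.0)                          # x_{n} = f(x_{n-1})
    bound = abs(x0 - 2.0) / (6.0 * 3.0 ** (1.0 / 3.0)) ** n
    print(n, abs(x - 2.0) <= bound, 6.0 ** n * abs(x - 2.0))
</code></pre>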
|
If $f$ is continuous and $f(A)$ is open, is $A$ necessarily open? | 2,091,649 | 5 | https://math.stackexchange.com/questions/2091649/if-f-is-continuous-and-fa-is-open-is-a-necessarily-open | <p>In the context of general topology, a function is said to be <strong>continuous</strong> if the pre-image of any open subset is also an open subset. Of course, this does not say that the image of an open set must be an open set (and in fact, this is not true in general). But, what about the statement <strong>"all sets whose image is open must be open"</strong>? This is a different statement, and I don't know what to think of it intuitively. My thoughts:</p>
<p>Let $X$ and $Y$ be topological spaces and let $f : X \to Y$ continuous. Let $A$ be a set such that $f(A)$ is open. I am trying to decide whether or not $A$ is necessarily open. I know from set theory that $A \subseteq f^{-1}(f(A))$, and the fact that it is not an equality is very important (if this was an equality, the statement would be immediately true). With this, I am able to conclude that $A$ is a subset of the open set $f^{-1}(f(A))$, and apparently I can't conclude anything better. With this in mind, it seems to me that the statement is probably false, but I was unable to find any counterexample either.</p>
| [
{
"answer_id": 2091651,
"score": 5,
"text": "<p>Let $A=(-2,0)\\cup(0,1]$ (isn't open in $\\mathbb R$, and $f(x)=x^2$ (continuous on $\\mathbb R$). Then $f(A)=(0,4)$ which is open in $\\mathbb R$.</p>\n"
},
{
"answer_id": 2091661,
"score": 5,
"text": "<p>Since you ask it in a general context let me add a simple abstract example. </p>\n\n<p>Let $Y$ be a topological space with a unique element. Let $X$ be any topological space. Then the unique $f:X \\to Y$ is continuous. </p>\n\n<p>We have $f(A)$ is open for every $A \\subset X$. Thus, as soon as in $X$ <em>not every</em> set is an open set, you have a counterexample. </p>\n"
},
{
"answer_id": 2091654,
"score": 2,
"text": "<p>Counterexample: The function $f:\\mathbb R\\to\\mathbb R, x\\mapsto x^2$ obviously is continuous. Be $S=[-2,-1)\\cup (2,3)$. This is clearly not an open set, but $f(S)=(1,9)$ which is open.</p>\n"
}
] | Question: If $f$ is continuous and $f(A)$ is open, is $A$ necessarily open?
Answer 1: <p>Let $A=(-2,0)\cup(0,1]$ (which isn't open in $\mathbb R$), and let $f(x)=x^2$ (continuous on $\mathbb R$). Then $f(A)=(0,4)$, which is open in $\mathbb R$.</p>
Answer 2: <p>Since you ask it in a general context, let me add a simple abstract example.</p>
<p>Let $Y$ be a topological space with a single element. Let $X$ be any topological space. Then the unique map $f:X \to Y$ is continuous.</p>
<p>Now $f(A)$ is open for every $A \subset X$. Thus, as soon as <em>not every</em> subset of $X$ is open, you have a counterexample.</p>
Answer 3: <p>Counterexample: The function $f:\mathbb R\to\mathbb R, x\mapsto x^2$ is obviously continuous. Let $S=[-2,-1)\cup (2,3)$. This is clearly not an open set, but $f(S)=(1,9)$, which is open.</p>
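<p>An illustrative numeric check (added here, not from the answers; assumes Python with NumPy): sampling Answer 3's set $S=[-2,-1)\cup(2,3)$ and squaring shows the image filling out $(1,9)$, with the endpoints $1$ and $9$ approached but never attained.</p>
<pre><code>import numpy as np

# Sample S = [-2,-1) u (2,3), dropping the excluded endpoints, and square.
s = np.concatenate([np.linspace(-2, -1, 10001)[:-1],
                    np.linspace(2, 3, 10001)[1:-1]])
image = s ** 2
print(image.min(), image.max())  # slightly above 1 and slightly below 9
</code></pre>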
|
Calculate $\int_{-\infty}^\infty\frac{\sin t}{t(1+t^2)}\,\mathrm{d}t$ | 2,091,924 | 5 | https://math.stackexchange.com/questions/2091924/calculate-int-infty-infty-frac-sin-tt1t2-mathrmdt | <blockquote>
<p>Evaluate <span class="math-container">$\displaystyle\int_{-\infty}^\infty\frac{\sin(t)}{t(1+t^2)}\,\mathrm{d}t.$</span></p>
</blockquote>
<p><span class="math-container">$$\int_{-\infty}^\infty\frac{\sin t}{t(1+t^2)}\,\mathrm{d}t= \mathfrak{Im}\left\{\int_{-\infty}^\infty\frac{\mathrm{e}^{\mathrm{i}t}}{t(1+t^2)} \,\mathrm{d}t\right\}$$</span>
To get the roots of the denominator, we consider <span class="math-container">$t(1+t^2)=0$</span>. Then, the roots are <span class="math-container">$\{0,\mathrm{i},-\mathrm{i}\}$</span>.</p>
<p>I am going to calculate the integral by closing the contour in the upper half-plane, picking up the full residue at <span class="math-container">$z=\mathrm{i}$</span> and half the residue at the real pole <span class="math-container">$z=0$</span> (indented contour).</p>
<p><span class="math-container">\begin{eqnarray}
\mathfrak{Im}\left\{\int_{-\infty}^\infty\frac{\mathrm{e}^{\mathrm{i}t}}{t(1+t^2)} \,\mathrm{d}t\right\}
&=&\mathfrak{Im}\left\{\pi\mathrm{i}\lim_{t\to0}\frac{\mathrm{e}^{\mathrm{i}t}}{1+t^2}+2 \pi\mathrm{i}\lim_{t\to\mathrm{i}}\frac{\mathrm{e}^{\mathrm{i}t}}{t(t+\mathrm{i})}\right\}\\
&=&\mathfrak{Im}\left\{\pi\mathrm{i}+ 2\pi\mathrm{i}\frac{\mathrm{e}^{-1}}{-2}\right\}\\
&=&\pi(1-\mathrm{e}^{-1})
\end{eqnarray}</span></p>
<p>Is it correct?</p>
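<p>As a rough numerical cross-check (my addition; it assumes <code>scipy</code> is available, and uses that the integrand is even):</p>
<pre><code>import numpy as np
from scipy.integrate import quad

# np.sinc(t/pi) equals sin(t)/t and avoids the 0/0 at the origin.
val, _ = quad(lambda t: np.sinc(t / np.pi) / (1.0 + t * t), 0.0, np.inf, limit=500)
print(2.0 * val, np.pi * (1.0 - np.exp(-1.0)))  # both should be ~1.98587
</code></pre>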
<p>How can I apply Plancherel's theorem to calculate the same integral?</p>
<p>Thanks!</p>
| [
{
"answer_id": 2091937,
"score": 4,
"text": "<p>Since $\\mathcal{L}(\\sin t)=\\frac{1}{1+s^2}$ and $\\mathcal{L}^{-1}\\left(\\frac{1}{t(t^2+1)}\\right)=1-\\cos(s)$, the value of the integral is\n$$ 2\\int_{0}^{+\\infty}\\frac{1-\\cos(s)}{1+s^2}\\,ds = \\pi-2\\int_{0}^{+\\infty}\\frac{\\cos(s)}{1+s^2} =\\color{red}{\\pi\\left(1-\\frac{1}{e}\\right).}$$</p>\n"
},
{
"answer_id": 2092306,
"score": 4,
"text": "<p>It seems no one has yet adressed the question of how to do this using Plancherel's theorem. Here it goes:</p>\n\n<p>The Fourier transform of $f(x) := \\chi_{[-1,1]}(x)$ is easily computed to be $$\\hat f (t) = \\sqrt{\\frac{2}{\\pi}} \\frac{\\sin(t)}{t},$$ and the Fourier transform of $g(x) := \\exp(-|x|)$ is $$\\hat g(t) =\\frac{1}{\\sqrt{2\\pi}} \\frac{2}{1+t^2}$$ (See <a href=\"https://math.stackexchange.com/questions/377073/calculate-the-fourier-transform-of-bx-frac1x2-a2\">this question</a> for a proof of the latter). Using Plancherel's theorem, we obtain</p>\n\n<p>$$\\int_{-\\infty}^{\\infty} \\frac{\\sin(t)}{t(1+t^2)}\\mathrm{d}t = \\frac{\\pi}{2} \\cdot (\\hat f, \\hat g)_{L^2} = \\frac{\\pi}{2}\\cdot (f,g)_{L^2} = \\pi \\int_{0}^{1} e^{-x} ~\\mathrm{d}x = \\pi(1-e^{-1}) $$</p>\n"
},
{
"answer_id": 2091938,
"score": 3,
"text": "<p>I verified your result using</p>\n\n<p>\\begin{align}\n\\int_{-\\infty}^{\\infty} \\frac{\\sin(t)}{t(1+t^2)} dt &= 2\\int^\\infty_0 \\frac{t\\sin(t)}{t^2(1+t^2)}\\,dt\\\\\n& = 2\\int^\\infty_0 \\frac{\\sin(t)}{t}\\,dt - 2\\int^\\infty_0 \\frac{t\\sin(t)}{1+t^2} \\,dt\\\\&= \\pi -\\frac{\\pi}{e}\n\\end{align}</p>\n\n<p>Both integrals are standards in contour integration. </p>\n"
},
{
"answer_id": 2092239,
"score": 1,
"text": "<p>Let us consider $$I\\left(x\\right)=2\\int_{0}^{\\infty}\\frac{\\sin\\left(xt\\right)}{t\\left(1+t^{2}\\right)}dt$$ then $$I'\\left(x\\right)=2\\int_{0}^{\\infty}\\frac{\\cos\\left(xt\\right)}{1+t^{2}}dt$$ and following <a href=\"https://math.stackexchange.com/questions/1972091/prove-a-integral-equation/1972426#1972426\">this proof</a> we get $$I'\\left(x\\right)=\\pi e^{-x}\n $$ then $$I\\left(x\\right)=-\\pi e^{-x}+C\n $$ and since $I\\left(0\\right)=0\n $ we have $$C=\\pi\n $$ so $$I\\left(1\\right)=2\\int_{0}^{\\infty}\\frac{\\sin\\left(t\\right)}{t\\left(1+t^{2}\\right)}dt=\\color{blue}{\\pi-\\frac{\\pi}{e}}.$$</p>\n"
}
] | Question: Calculate $\int_{-\infty}^\infty\frac{\sin t}{t(1+t^2)}\,\mathrm{d}t$
Answer 1: <p>Since $\mathcal{L}(\sin t)=\frac{1}{1+s^2}$ and $\mathcal{L}^{-1}\left(\frac{1}{t(t^2+1)}\right)=1-\cos(s)$, the value of the integral is
$$ 2\int_{0}^{+\infty}\frac{1-\cos(s)}{1+s^2}\,ds = \pi-2\int_{0}^{+\infty}\frac{\cos(s)}{1+s^2}\,ds =\color{red}{\pi\left(1-\frac{1}{e}\right).}$$</p>
Answer 2: <p>It seems no one has yet addressed the question of how to do this using Plancherel's theorem. Here it goes:</p>
<p>The Fourier transform of $f(x) := \chi_{[-1,1]}(x)$ is easily computed to be $$\hat f (t) = \sqrt{\frac{2}{\pi}} \frac{\sin(t)}{t},$$ and the Fourier transform of $g(x) := \exp(-|x|)$ is $$\hat g(t) =\frac{1}{\sqrt{2\pi}} \frac{2}{1+t^2}$$ (See <a href="https://math.stackexchange.com/questions/377073/calculate-the-fourier-transform-of-bx-frac1x2-a2">this question</a> for a proof of the latter). Using Plancherel's theorem, we obtain</p>
<p>$$\int_{-\infty}^{\infty} \frac{\sin(t)}{t(1+t^2)}\mathrm{d}t = \frac{\pi}{2} \cdot (\hat f, \hat g)_{L^2} = \frac{\pi}{2}\cdot (f,g)_{L^2} = \pi \int_{0}^{1} e^{-x} ~\mathrm{d}x = \pi(1-e^{-1}) $$</p>
Answer 3: <p>I verified your result using</p>
<p>\begin{align}
\int_{-\infty}^{\infty} \frac{\sin(t)}{t(1+t^2)} dt &= 2\int^\infty_0 \frac{t\sin(t)}{t^2(1+t^2)}\,dt\\
& = 2\int^\infty_0 \frac{\sin(t)}{t}\,dt - 2\int^\infty_0 \frac{t\sin(t)}{1+t^2} \,dt\\&= \pi -\frac{\pi}{e}
\end{align}</p>
<p>Both integrals are standard results in contour integration.</p>
Answer 4: <p>Let us consider $$I\left(x\right)=2\int_{0}^{\infty}\frac{\sin\left(xt\right)}{t\left(1+t^{2}\right)}dt$$ then $$I'\left(x\right)=2\int_{0}^{\infty}\frac{\cos\left(xt\right)}{1+t^{2}}dt$$ and following <a href="https://math.stackexchange.com/questions/1972091/prove-a-integral-equation/1972426#1972426">this proof</a> we get $$I'\left(x\right)=\pi e^{-x}
$$ then $$I\left(x\right)=-\pi e^{-x}+C
$$ and since $I\left(0\right)=0
$ we have $$C=\pi
$$ so $$I\left(1\right)=2\int_{0}^{\infty}\frac{\sin\left(t\right)}{t\left(1+t^{2}\right)}dt=\color{blue}{\pi-\frac{\pi}{e}}.$$</p>
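All four routes above agree on the closed form $\pi-\frac{\pi}{e}$; the splitting used in Answer 3 is just the partial-fraction identity $\frac{1}{t(1+t^2)}=\frac{1}{t}-\frac{t}{1+t^2}$. As a quick numerical sanity check, here is a minimal sketch in Python (assuming the mpmath library is available; its quadosc routine is built for oscillatory integrands of exactly this kind):

from mpmath import mp, sin, pi, e, quadosc

mp.dps = 30  # work to 30 significant digits
f = lambda t: sin(t) / (t * (1 + t**2))  # even integrand; the singularity at t = 0 is removable
numeric = 2 * quadosc(f, [0, mp.inf], period=2 * pi)  # even symmetry: double the half-line integral
print(numeric)       # 1.98586530379...
print(pi - pi / e)   # closed form, agrees to working precision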
|
Solve $\int_0^{+\infty} \Big(\frac{\sin t}{t}\Big)^2 \ dt$ | 2,092,158 | 5 | https://math.stackexchange.com/questions/2092158/solve-int-0-infty-big-frac-sin-tt-big2-dt | <p>$$\int_0^{+\infty} \Big(\frac{\sin t}{t}\Big)^2 \ dt=\frac{1}{2} \int_{-\infty}^{+\infty} \Big(\frac{\sin t}{t}\Big)^2 \ dt=\frac{1}{2} \Big\rvert \Big\rvert \frac{\sin t}{t} \Big\rvert \Big\rvert_2^2 $$</p>
<p><br /></p>
<p>Plancherel theorem:</p>
<p>$$2 \pi \Big\rvert \Big\rvert f \Big\rvert\Big\rvert_2^2=\Big\rvert \Big\rvert \mathscr{F}(f) \Big\rvert\Big\rvert_2^2$$</p>
<p><br></p>
<p>$$p_2(x)=\begin{cases} 1 \qquad x \in [-1,1] \\ 0 \qquad x \notin [-1,1] \end{cases}$$
<br>
$$\frac{1}{2} \Big\rvert \Big\rvert \frac{\sin t}{t} \Big\rvert \Big\rvert_2^2=\frac{1}{2} 2\pi \Big\rvert \Big\rvert \frac{1}{2} \ p_2 \Big\rvert \Big\rvert_2^2=\frac{\pi}{2} $$</p>
<p><br ></p>
<p>Is there any other method to calculate this integral?</p>
<p>Thanks!</p>
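The Plancherel computation above can be spot-checked numerically; a minimal sketch, again assuming Python with the mpmath library:

from mpmath import mp, sin, pi, quadosc

mp.dps = 30
g = lambda t: (sin(t) / t)**2  # sinc squared; removable singularity at t = 0
print(quadosc(g, [0, mp.inf], period=pi))  # 1.57079632679...
print(pi / 2)                              # matches pi/2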
| [
{
"answer_id": 2092179,
"score": 6,
"text": "<p>Application of the convolution theorem provides another way forward. Let $f(t)$ and $F(\\omega)$ constitute the Fourier Transform pair as given by </p>\n\n<p>$$\\begin{align}f(x)&=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^\\infty F(\\omega)e^{-i\\omega t}\\,d\\omega\\\\\\\\\nF(\\omega)&=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^\\infty f(t)e^{i\\omega t}\\,dt\n\\end{align}$$</p>\n\n<p>Note that we have the Fourier Transform pairs </p>\n\n<blockquote>\n <p>$$\\begin{align}\nf(x) &\\leftrightarrow F(\\omega)\\\\\\\\\nf^2(x) &\\leftrightarrow \\frac{1}{\\sqrt {2\\pi}}F(\\omega)*F(\\omega)\\\\\\\\\n\\frac{\\sin(x)}{x}&\\leftrightarrow \\sqrt{\\frac{\\pi}{2}}\\text{rect}(\\omega/2)\\\\\\\\\n\\frac{\\sin^2(x)}{x^2}&\\leftrightarrow \\frac{1}{\\sqrt{2\\pi}}\\left(\\sqrt{\\frac{\\pi}{2}}\\text{rect}(\\omega/2)\\right)*\\left(\\sqrt{\\frac{\\pi}{2}}\\text{rect}(\\omega/2)\\right)=\\sqrt{\\frac\\pi8}2 \\text{tri}(\\omega/2) \\tag 1\n\\end{align}$$</p>\n</blockquote>\n\n<p>where $\\text{rect}(t)$ and $\\text{tri}(t)$ are the <a href=\"https://en.wikipedia.org/wiki/Rectangular_function\" rel=\"nofollow noreferrer\">Rectangle Function</a> and <a href=\"https://en.wikipedia.org/wiki/Triangular_function\" rel=\"nofollow noreferrer\">Triangle Function</a>, respectively.</p>\n\n<p>Finally, note that from $(1)$ we find</p>\n\n<p>$$\\int_0^\\infty \\frac{\\sin^2(x)}{x^2}\\,dx=\\frac12\\times \\sqrt{2\\pi}\\times \\sqrt{\\frac\\pi8}\\times 2=\\pi/2$$</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>In <a href=\"https://math.stackexchange.com/questions/2077565/integrating-powers-of-frac-sin-xx-using-fourier-transforms/2077656#2077656\">THIS ANSWER</a>, I applied the same approach to evaluate the integral $$\\int_0^\\infty \\frac{\\sin^4(x)}{x^4}\\,dx$$</p>\n"
},
{
"answer_id": 2092197,
"score": 6,
"text": "<p>What about the good old integration by parts? We have:</p>\n\n<p>$$ \\int_{-\\infty}^{+\\infty}\\frac{\\sin^2 x}{x^2}\\,dx = \\int_{-\\infty}^{+\\infty}\\frac{2\\sin(x)\\cos(x)}{x} \\stackrel{x\\mapsto\\frac{t}{2}}{=} \\int_{-\\infty}^{+\\infty}\\frac{\\sin t}{t}\\stackrel{\\text{Dirichlet}}{=}\\color{red}{\\pi}.$$</p>\n\n<p>As an alternative, we may use the <a href=\"https://en.wikipedia.org/wiki/Laplace_transform#Evaluating_integrals_over_the_positive_real_axis\" rel=\"nofollow noreferrer\">Laplace transform</a>.<br> Since $\\mathcal{L}(\\sin^2 x)=\\frac{2}{s(4+s^2)}$ and $\\mathcal{L}^{-1}\\left(\\frac{1}{x^2}\\right)=s$, we have:\n$$ \\int_{-\\infty}^{+\\infty}\\frac{\\sin^2 x}{x^2}\\,dx = 2\\int_{0}^{+\\infty}\\frac{2\\,ds}{4+s^2} \\stackrel{s\\mapsto 2t}{=} 2\\int_{0}^{+\\infty}\\frac{dt}{1+t^2}=\\color{red}{\\pi}.$$</p>\n"
},
{
"answer_id": 2092211,
"score": 3,
"text": "<p>Over the top approach:</p>\n\n<p>Recoginze that</p>\n\n<p>$$\n\\frac{\\sin(t)}t=\\frac{1}{2}\\int_{-1}^1dxe^{i xt}\n$$</p>\n\n<p>Therefore</p>\n\n<p>$$\n2\\int_{0}^{\\infty}\\frac{\\sin(t)^2}{t^2}dt=\\int_{-\\infty}^{\\infty}\\frac{\\sin(t)^2}{t^2}dt=\\frac{1}{4}\\int_{-1}^1dx'\\int_{-1}^1dx\\int_{-\\infty}^{\\infty} dte^{i (x+x')t}=\\\\\\frac{\\pi}{2}\\int_{-1}^1dx'\\int_{-1}^1dx\\delta(x+x')=\\frac{\\pi}{2}\\int_{-1}^1dx'\\textbf{1}_{(1,-1)}=\\pi\n$$</p>\n\n<p>$\\textbf{1}_A$ denotes the indicator function on the set $A$ and $\\delta(u)$ is Dirac's delta distribution</p>\n\n<p>QED</p>\n"
},
{
"answer_id": 2092222,
"score": 3,
"text": "<p>And yet another approach is to use contour integration. Note that using Cauchy's Integral Theorem, we can write</p>\n\n<p>$$\\begin{align}\n\\int_0^\\infty\\frac{\\sin^2(t)}{t^2}\\,dt&=\\frac14\\int_{-\\infty}^\\infty \\frac{1-\\cos(2t)}{t^2}\\,dt\\\\\\\\\n&=\\frac12\\int_{-\\infty}^\\infty \\frac{1-\\cos(t)}{t^2}\\,dt\\\\\\\\\n&=\\frac12 \\lim_{L\\to \\infty,\\epsilon\\to 0}\\text{Re}\\left(\\int_{-L}^{-\\epsilon} \\frac{1-e^{it}}{t^2}\\,dt+\\int_{\\epsilon}^L \\frac{1-e^{it}}{t^2}\\,dt\\right)\\\\\\\\\n&=\\frac12 \\lim_{\\epsilon\\to 0}\\int_0^\\pi \\frac{1-e^{i\\epsilon^{i\\phi}}}{\\epsilon^2 e^{i2\\phi}}\\,i\\epsilon e^{i\\phi}\\,d\\phi-\\frac12 \\lim_{L\\to \\infty}\\int_0^\\pi \\frac{1-e^{iL^{i\\phi}}}{L^2 e^{i2\\phi}}\\,iL e^{i\\phi}\\,d\\phi\\\\\\\\\n&=\\pi/2\n\\end{align}$$</p>\n"
},
{
"answer_id": 2129671,
"score": 0,
"text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n<strong>$\\large 1)$</strong>\n\\begin{align}\n\\int_{0}^{\\infty}{\\sin^{2}\\pars{t} \\over t^{2}}\\,\\dd t & =\n{1 \\over 2}\\lim_{N \\to \\infty}\\int_{-N\\pi}^{N\\pi}\n{\\sin^{2}\\pars{t} \\over t^{2}}\\,\\dd t =\n{1 \\over 2}\\lim_{N \\to \\infty}\\sum_{k = -N}^{N - 1}\\int_{k\\pi}^{\\pars{k + 1}\\pi}{\\sin^{2}\\pars{t} \\over t^{2}}\\,\\dd t\n\\\\[5mm] & =\n{1 \\over 2}\\lim_{N \\to \\infty}\\sum_{k = -N}^{N - 1}\\int_{0}^{\\pi}{\\sin^{2}\\pars{t} \\over \\pars{t + k\\pi}^{2}}\\,\\dd t =\n{1 \\over 2}\\int_{0}^{\\pi}\\sin^{2}\\pars{t}\n\\sum_{k = -\\infty}^{\\infty}{1 \\over \\pars{t + k\\pi}^{2}}\\,\\dd t\n\\\\[5mm] & =\n{1 \\over 2}\\int_{0}^{\\pi}\\sin^{2}\\pars{t}\\csc^{2}\\pars{t}\\,\\dd t =\n\\bbx{\\ds{\\pi \\over 2}}\n\\end{align}\n<hr>\n<strong>$\\large 2)$</strong>\n\\begin{align}\n\\int_{0}^{\\infty}{\\sin^{2}\\pars{t} \\over t^{2}}\\,\\dd t & =\n\\int_{0}^{\\infty}\\sin^{2}\\pars{t}\\pars{\\int_{0}^{\\infty}x\\expo{-tx}\\,\\dd x}\n\\dd t =\n\\int_{0}^{\\infty}x\\,\\Re\\int_{0}^{\\infty}\n{1 - \\expo{2t\\,\\ic} \\over 2}\\,\\expo{-tx}\\dd t\\,\\dd x\n\\\\[5mm] & =\n{1 \\over 2}\\int_{0}^{\\infty}x\\,{4/x \\over 4 + x^{2}}\\,\\dd x =\n\\int_{0}^{\\infty}{\\dd x \\over x^{2} + 1} = \\bbx{\\ds{\\pi \\over 2}}\n\\end{align}</p>\n"
},
{
"answer_id": 2092179,
"score": 6,
"text": "<p>Application of the convolution theorem provides another way forward. Let $f(t)$ and $F(\\omega)$ constitute the Fourier Transform pair as given by </p>\n\n<p>$$\\begin{align}f(x)&=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^\\infty F(\\omega)e^{-i\\omega t}\\,d\\omega\\\\\\\\\nF(\\omega)&=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^\\infty f(t)e^{i\\omega t}\\,dt\n\\end{align}$$</p>\n\n<p>Note that we have the Fourier Transform pairs </p>\n\n<blockquote>\n <p>$$\\begin{align}\nf(x) &\\leftrightarrow F(\\omega)\\\\\\\\\nf^2(x) &\\leftrightarrow \\frac{1}{\\sqrt {2\\pi}}F(\\omega)*F(\\omega)\\\\\\\\\n\\frac{\\sin(x)}{x}&\\leftrightarrow \\sqrt{\\frac{\\pi}{2}}\\text{rect}(\\omega/2)\\\\\\\\\n\\frac{\\sin^2(x)}{x^2}&\\leftrightarrow \\frac{1}{\\sqrt{2\\pi}}\\left(\\sqrt{\\frac{\\pi}{2}}\\text{rect}(\\omega/2)\\right)*\\left(\\sqrt{\\frac{\\pi}{2}}\\text{rect}(\\omega/2)\\right)=\\sqrt{\\frac\\pi8}2 \\text{tri}(\\omega/2) \\tag 1\n\\end{align}$$</p>\n</blockquote>\n\n<p>where $\\text{rect}(t)$ and $\\text{tri}(t)$ are the <a href=\"https://en.wikipedia.org/wiki/Rectangular_function\" rel=\"nofollow noreferrer\">Rectangle Function</a> and <a href=\"https://en.wikipedia.org/wiki/Triangular_function\" rel=\"nofollow noreferrer\">Triangle Function</a>, respectively.</p>\n\n<p>Finally, note that from $(1)$ we find</p>\n\n<p>$$\\int_0^\\infty \\frac{\\sin^2(x)}{x^2}\\,dx=\\frac12\\times \\sqrt{2\\pi}\\times \\sqrt{\\frac\\pi8}\\times 2=\\pi/2$$</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>In <a href=\"https://math.stackexchange.com/questions/2077565/integrating-powers-of-frac-sin-xx-using-fourier-transforms/2077656#2077656\">THIS ANSWER</a>, I applied the same approach to evaluate the integral $$\\int_0^\\infty \\frac{\\sin^4(x)}{x^4}\\,dx$$</p>\n"
},
{
"answer_id": 2092197,
"score": 6,
"text": "<p>What about the good old integration by parts? We have:</p>\n\n<p>$$ \\int_{-\\infty}^{+\\infty}\\frac{\\sin^2 x}{x^2}\\,dx = \\int_{-\\infty}^{+\\infty}\\frac{2\\sin(x)\\cos(x)}{x} \\stackrel{x\\mapsto\\frac{t}{2}}{=} \\int_{-\\infty}^{+\\infty}\\frac{\\sin t}{t}\\stackrel{\\text{Dirichlet}}{=}\\color{red}{\\pi}.$$</p>\n\n<p>As an alternative, we may use the <a href=\"https://en.wikipedia.org/wiki/Laplace_transform#Evaluating_integrals_over_the_positive_real_axis\" rel=\"nofollow noreferrer\">Laplace transform</a>.<br> Since $\\mathcal{L}(\\sin^2 x)=\\frac{2}{s(4+s^2)}$ and $\\mathcal{L}^{-1}\\left(\\frac{1}{x^2}\\right)=s$, we have:\n$$ \\int_{-\\infty}^{+\\infty}\\frac{\\sin^2 x}{x^2}\\,dx = 2\\int_{0}^{+\\infty}\\frac{2\\,ds}{4+s^2} \\stackrel{s\\mapsto 2t}{=} 2\\int_{0}^{+\\infty}\\frac{dt}{1+t^2}=\\color{red}{\\pi}.$$</p>\n"
}
] | Question: Solve $\int_0^{+\infty} \Big(\frac{\sin t}{t}\Big)^2 \ dt$
Answer 1: <p>Application of the convolution theorem provides another way forward. Let $f(t)$ and $F(\omega)$ constitute the Fourier Transform pair as given by </p>
<p>$$\begin{align}f(t)&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty F(\omega)e^{-i\omega t}\,d\omega\\\\
F(\omega)&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(t)e^{i\omega t}\,dt
\end{align}$$</p>
<p>Note that we have the Fourier Transform pairs </p>
<blockquote>
<p>$$\begin{align}
f(x) &\leftrightarrow F(\omega)\\\\
f^2(x) &\leftrightarrow \frac{1}{\sqrt {2\pi}}F(\omega)*F(\omega)\\\\
\frac{\sin(x)}{x}&\leftrightarrow \sqrt{\frac{\pi}{2}}\text{rect}(\omega/2)\\\\
\frac{\sin^2(x)}{x^2}&\leftrightarrow \frac{1}{\sqrt{2\pi}}\left(\sqrt{\frac{\pi}{2}}\text{rect}(\omega/2)\right)*\left(\sqrt{\frac{\pi}{2}}\text{rect}(\omega/2)\right)=\sqrt{\frac\pi8}2 \text{tri}(\omega/2) \tag 1
\end{align}$$</p>
</blockquote>
<p>where $\text{rect}(t)$ and $\text{tri}(t)$ are the <a href="https://en.wikipedia.org/wiki/Rectangular_function" rel="nofollow noreferrer">Rectangle Function</a> and <a href="https://en.wikipedia.org/wiki/Triangular_function" rel="nofollow noreferrer">Triangle Function</a>, respectively.</p>
<p>Finally, note that from $(1)$ we find</p>
<p>$$\int_0^\infty \frac{\sin^2(x)}{x^2}\,dx=\frac12\times \sqrt{2\pi}\times \sqrt{\frac\pi8}\times 2=\pi/2$$</p>
<hr>
<p><strong>NOTE:</strong></p>
<p>In <a href="https://math.stackexchange.com/questions/2077565/integrating-powers-of-frac-sin-xx-using-fourier-transforms/2077656#2077656">THIS ANSWER</a>, I applied the same approach to evaluate the integral $$\int_0^\infty \frac{\sin^4(x)}{x^4}\,dx$$</p>
Answer 2: <p>What about the good old integration by parts? Integrating by parts with $u=\sin^2 x$ and $v=-\frac{1}{x}$ (the boundary terms vanish), we have:</p>
<p>$$ \int_{-\infty}^{+\infty}\frac{\sin^2 x}{x^2}\,dx = \int_{-\infty}^{+\infty}\frac{2\sin(x)\cos(x)}{x}\,dx \stackrel{x\mapsto\frac{t}{2}}{=} \int_{-\infty}^{+\infty}\frac{\sin t}{t}\,dt\stackrel{\text{Dirichlet}}{=}\color{red}{\pi}.$$</p>
<p>As an alternative, we may use the <a href="https://en.wikipedia.org/wiki/Laplace_transform#Evaluating_integrals_over_the_positive_real_axis" rel="nofollow noreferrer">Laplace transform</a>.<br> Since $\mathcal{L}(\sin^2 x)=\frac{2}{s(4+s^2)}$ and $\mathcal{L}^{-1}\left(\frac{1}{x^2}\right)=s$, we have:
$$ \int_{-\infty}^{+\infty}\frac{\sin^2 x}{x^2}\,dx = 2\int_{0}^{+\infty}\frac{2\,ds}{4+s^2} \stackrel{s\mapsto 2t}{=} 2\int_{0}^{+\infty}\frac{dt}{1+t^2}=\color{red}{\pi}.$$</p>
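<p>The Laplace-transform pair quoted above can be double-checked symbolically; a short sketch, assuming SymPy is available:</p>
<pre><code>import sympy as sp

x, s = sp.symbols('x s', positive=True)

# L(sin^2 x); noconds=True drops the convergence conditions.
F = sp.laplace_transform(sp.sin(x)**2, x, s, noconds=True)
print(sp.simplify(F - 2 / (s * (4 + s**2))))  # prints 0

# The resulting s-integral from the answer indeed evaluates to pi.
print(sp.integrate(2 * 2 / (4 + s**2), (s, 0, sp.oo)))  # pi
</code></pre>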
Answer 3: <p>Over the top approach:</p>
<p>Recognize that</p>
<p>$$
\frac{\sin(t)}t=\frac{1}{2}\int_{-1}^1dx\,e^{ixt}
$$</p>
<p>Therefore</p>
<p>$$
2\int_{0}^{\infty}\frac{\sin(t)^2}{t^2}dt=\int_{-\infty}^{\infty}\frac{\sin(t)^2}{t^2}dt=\frac{1}{4}\int_{-1}^1dx'\int_{-1}^1dx\int_{-\infty}^{\infty} dt\,e^{i(x+x')t}=\\\frac{\pi}{2}\int_{-1}^1dx'\int_{-1}^1dx\,\delta(x+x')=\frac{\pi}{2}\int_{-1}^1dx'\,\textbf{1}_{(-1,1)}(x')=\pi
$$</p>
<p>Thus the original integral equals $\pi/2$. Here $\textbf{1}_A$ denotes the indicator function of the set $A$ and $\delta(u)$ is Dirac's delta distribution.</p>
<p>QED</p>
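<p>The only nontrivial ingredient here is the integral representation of $\sin(t)/t$, which is easy to spot-check numerically; a sketch assuming SciPy, with $t=1.7$ as an arbitrary test point:</p>
<pre><code>import numpy as np
from scipy.integrate import quad

t = 1.7  # arbitrary nonzero test point
# (1/2) * int_{-1}^{1} e^{ixt} dx: the imaginary part vanishes by
# symmetry, leaving (1/2) * int_{-1}^{1} cos(x*t) dx = sin(t)/t.
half_integral, _ = quad(lambda x: 0.5 * np.cos(x * t), -1, 1)
print(half_integral, np.sin(t) / t)  # agree to quadrature accuracy
</code></pre>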
Answer 4: <p>And yet another approach is to use contour integration. Note that using Cauchy's Integral Theorem (indenting the path around the origin along the small semicircle $t=\epsilon e^{i\phi}$ and closing it along the large semicircle $t=Le^{i\phi}$, $0\le\phi\le\pi$), we can write</p>
<p>$$\begin{align}
\int_0^\infty\frac{\sin^2(t)}{t^2}\,dt&=\frac14\int_{-\infty}^\infty \frac{1-\cos(2t)}{t^2}\,dt\\\\
&=\frac12\int_{-\infty}^\infty \frac{1-\cos(t)}{t^2}\,dt\\\\
&=\frac12 \lim_{L\to \infty,\epsilon\to 0}\text{Re}\left(\int_{-L}^{-\epsilon} \frac{1-e^{it}}{t^2}\,dt+\int_{\epsilon}^L \frac{1-e^{it}}{t^2}\,dt\right)\\\\
&=\frac12 \lim_{\epsilon\to 0}\int_0^\pi \frac{1-e^{i\epsilon e^{i\phi}}}{\epsilon^2 e^{i2\phi}}\,i\epsilon e^{i\phi}\,d\phi-\frac12 \lim_{L\to \infty}\int_0^\pi \frac{1-e^{iL e^{i\phi}}}{L^2 e^{i2\phi}}\,iL e^{i\phi}\,d\phi\\\\
&=\pi/2
\end{align}$$</p>
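<p>The second line of the chain above can be tested on its own; a numerical sketch assuming SciPy (note that $(1-\cos t)/t^2\to 1/2$ as $t\to 0$, so the integrand is bounded):</p>
<pre><code>import numpy as np
from scipy.integrate import quad

# Half of the symmetric integral of (1 - cos t)/t^2 should be pi/2.
val, _ = quad(lambda t: (1 - np.cos(t)) / t**2, 0, np.inf, limit=500)
print(val, np.pi / 2)  # ~1.5707963
</code></pre>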
Answer 5: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
<strong>$\large 1)$</strong>
\begin{align}
\int_{0}^{\infty}{\sin^{2}\pars{t} \over t^{2}}\,\dd t & =
{1 \over 2}\lim_{N \to \infty}\int_{-N\pi}^{N\pi}
{\sin^{2}\pars{t} \over t^{2}}\,\dd t =
{1 \over 2}\lim_{N \to \infty}\sum_{k = -N}^{N - 1}\int_{k\pi}^{\pars{k + 1}\pi}{\sin^{2}\pars{t} \over t^{2}}\,\dd t
\\[5mm] & =
{1 \over 2}\lim_{N \to \infty}\sum_{k = -N}^{N - 1}\int_{0}^{\pi}{\sin^{2}\pars{t} \over \pars{t + k\pi}^{2}}\,\dd t =
{1 \over 2}\int_{0}^{\pi}\sin^{2}\pars{t}
\sum_{k = -\infty}^{\infty}{1 \over \pars{t + k\pi}^{2}}\,\dd t
\\[5mm] & =
{1 \over 2}\int_{0}^{\pi}\sin^{2}\pars{t}\csc^{2}\pars{t}\,\dd t =
\bbx{\ds{\pi \over 2}}
\end{align}
<hr>
<strong>$\large 2)$</strong>
\begin{align}
\int_{0}^{\infty}{\sin^{2}\pars{t} \over t^{2}}\,\dd t & =
\int_{0}^{\infty}\sin^{2}\pars{t}\pars{\int_{0}^{\infty}x\expo{-tx}\,\dd x}
\dd t =
\int_{0}^{\infty}x\,\Re\int_{0}^{\infty}
{1 - \expo{2t\,\ic} \over 2}\,\expo{-tx}\dd t\,\dd x
\\[5mm] & =
{1 \over 2}\int_{0}^{\infty}x\,{4/x \over 4 + x^{2}}\,\dd x =
\int_{0}^{\infty}{\dd x \over x^{2} + 1} = \bbx{\ds{\pi \over 2}}
\end{align}</p>
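<p>Method $1)$ hinges on the classical expansion $\sum_{k=-\infty}^{\infty}(t+k\pi)^{-2}=\csc^{2}(t)$; a truncated numerical check in plain NumPy (the test point and truncation level are arbitrary):</p>
<pre><code>import numpy as np

t = 0.9                           # arbitrary point in (0, pi)
k = np.arange(-10**5, 10**5 + 1)  # truncate the series at |k| = 10^5
partial_sum = np.sum(1.0 / (t + k * np.pi) ** 2)

# The neglected tail is O(1/N), so ~5-6 correct digits are expected.
print(partial_sum, 1.0 / np.sin(t) ** 2)
</code></pre>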
|
Russian books on Fourier Series | 2092207 | 5 | https://math.stackexchange.com/questions/2092207/russian-books-on-fourier-series | <p>Russians are known for having very strong literature in Mathematics and the Sciences overall. As I am willing to improve my knowledge and problem solving in real analysis, especially in Fourier series, I wonder which books are written and used by the Russians (I have a somewhat good level of Russian)?</p>
| [
{
"answer_id": 2096753,
"score": 2,
"text": "<p>At the top of my head I would recommend Georgi Tolstov's <a href=\"https://www.ozon.ru/context/detail/id/4472942/\" rel=\"nofollow noreferrer\">\"Fourier Series\"</a> (1960 or 1962 version) which is also available in <a href=\"http://rads.stackoverflow.com/amzn/click/B00BXU509W\" rel=\"nofollow noreferrer\">English</a> but rather easy to find in Russian as well. \nIt provides for a very smooth and elegant introduction to the topic as far as my experience allows me to say. Descriptive and very well balanced. In any case it is considered a classic.</p>\n\n<p>Another choice would be Anatolyi Aksionov's <a href=\"http://www.e-reading.club/book.php?book=133593\" rel=\"nofollow noreferrer\">\"Mathematical Analysis. Fourier Series\"</a> which is rather short (~90 pages) and part of a series on textbooks on analysis by the same author. I have not read it though. </p>\n\n<p>A book on the topic written by a Russian, but in English (and perhaps the most recent one covering both of these \"conditions\") is Alexander Koldobsky's <a href=\"https://www.amazon.co.uk/Mathematical-Monographs-Alexander-Koldobsky-2005-05-01/dp/B01JXMMPT4/ref=asap_bc?ie=UTF8\" rel=\"nofollow noreferrer\">\"Fourier Analysis In Convex Geometry\"</a> which is certainly more specialised and focuses on the application of Fourier analytic methods in solving the Busemann-Petty problem (for more on this check <a href=\"https://mathoverflow.net/questions/35468/widely-accepted-mathematical-results-that-were-later-shown-wrong\">here</a>).</p>\n"
},
{
"answer_id": 2099622,
"score": 2,
"text": "<p>I hope it can help you :</p>\n\n<p>Season $6-7$ of volume $3$ of The fundamentals of mathematical analysis by Grigorii Fichtenholz </p>\n\n<p>фихтенгольц - курс дифференциального и интегрального исчисления том 3:</p>\n\n<p><a href=\"http://ikfia.ysn.ru/images/doc/mat_analiz/Fihtengolc_t3_1966ru.pdf\" rel=\"nofollow noreferrer\">original language book</a></p>\n\n<p>author: <a href=\"https://ru.wikipedia.org/wiki/%D0%A4%D0%B8%D1%85%D1%82%D0%B5%D0%BD%D0%B3%D0%BE%D0%BB%D1%8C%D1%86,_%D0%93%D1%80%D0%B8%D0%B3%D0%BE%D1%80%D0%B8%D0%B9_%D0%9C%D0%B8%D1%85%D0%B0%D0%B9%D0%BB%D0%BE%D0%B2%D0%B8%D1%87\" rel=\"nofollow noreferrer\">Фихтенгольц, Григорий Михайлович</a></p>\n"
}
] | Question: Russian books on Fourier Series
Answer 1: <p>Off the top of my head I would recommend Georgi Tolstov's <a href="https://www.ozon.ru/context/detail/id/4472942/" rel="nofollow noreferrer">"Fourier Series"</a> (1960 or 1962 version), which is also available in <a href="http://rads.stackoverflow.com/amzn/click/B00BXU509W" rel="nofollow noreferrer">English</a> but rather easy to find in Russian as well.
It provides a very smooth and elegant introduction to the topic, as far as my experience allows me to say. Descriptive and very well balanced. In any case it is considered a classic.</p>
<p>Another choice would be Anatolyi Aksionov's <a href="http://www.e-reading.club/book.php?book=133593" rel="nofollow noreferrer">"Mathematical Analysis. Fourier Series"</a>, which is rather short (~90 pages) and part of a series of textbooks on analysis by the same author. I have not read it, though.</p>
<p>A book on the topic written by a Russian, but in English (and perhaps the most recent one covering both of these "conditions"), is Alexander Koldobsky's <a href="https://www.amazon.co.uk/Mathematical-Monographs-Alexander-Koldobsky-2005-05-01/dp/B01JXMMPT4/ref=asap_bc?ie=UTF8" rel="nofollow noreferrer">"Fourier Analysis In Convex Geometry"</a>, which is certainly more specialised and focuses on the application of Fourier-analytic methods in solving the Busemann-Petty problem (for more on this, check <a href="https://mathoverflow.net/questions/35468/widely-accepted-mathematical-results-that-were-later-shown-wrong">here</a>).</p>
Answer 2: <p>I hope this can help you:</p>
<p>Sections $6-7$ of volume $3$ of The Fundamentals of Mathematical Analysis by Grigorii Fichtenholz</p>
<p>фихтенгольц - курс дифференциального и интегрального исчисления том 3 (Fichtenholz, <em>A Course of Differential and Integral Calculus</em>, vol. 3):</p>
<p><a href="http://ikfia.ysn.ru/images/doc/mat_analiz/Fihtengolc_t3_1966ru.pdf" rel="nofollow noreferrer">original language book</a></p>
<p>author: <a href="https://ru.wikipedia.org/wiki/%D0%A4%D0%B8%D1%85%D1%82%D0%B5%D0%BD%D0%B3%D0%BE%D0%BB%D1%8C%D1%86,_%D0%93%D1%80%D0%B8%D0%B3%D0%BE%D1%80%D0%B8%D0%B9_%D0%9C%D0%B8%D1%85%D0%B0%D0%B9%D0%BB%D0%BE%D0%B2%D0%B8%D1%87" rel="nofollow noreferrer">Фихтенгольц, Григорий Михайлович</a></p>
|
What is the meaning of the last part of the proof of this theorem? | 2,092,682 | 5 | https://math.stackexchange.com/questions/2092682/what-is-the-meaning-of-the-last-part-of-the-proof-of-this-theorem | <p><a href="https://i.stack.imgur.com/G2nd9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G2nd9.png" alt="enter image description here"></a></p>
<p>I am trying to understand the last part of the proof of this theorem, but I am really confused as to what the author has done. I have an exam in a couple of days and I don't like memorising things without understanding them, but I got lost among the meanings of $y_m, y_{1,m}, y_{2,m}, \beta^{(m)}_{1}$ and $\beta^{(m)}_{2}$.</p>
<p>Thanks.</p>
<p>Any help is appreciated.</p>
| [
{
"answer_id": 2092699,
"score": 3,
"text": "<p>This is definitely a useful trick to understand.</p>\n\n<p>We start by looking at the sequence $(\\beta_1^{(m)})_{m \\in \\Bbb N}$. It has a convergent subsequence, $(\\beta_1^{k_m})_{m \\in \\Bbb N}$. We will equivalently label this sequence as $(\\beta_1^{k_{1,m}})_{m \\in \\Bbb N}$; more on that in a bit. We can define a corresponding subsequence $(y_{1,m})_{m \\in \\Bbb N}$ by\n$$\ny_{1,m} = \\beta_1^{(k_{1,m})}x_1 + \\beta_2^{(k_{1,m})}x_2 + \\cdots + \\beta_n^{(k_{1,m})}x_n\n$$\nFrom there, we look at the sequence $(\\beta^{(k_{1,m})}_2)_{m \\in \\Bbb N}$ (which doesn't necessarily converge). We can select a subsequence (of this subsequence!) that converges. That is, we look at $(\\beta^{(k_{2,m})}_2)_{m \\in \\Bbb N}$, which satisfies $\\lim_{m \\to \\infty} \\beta^{(k_{2,m})}_2 \\to \\beta_2$. Notice that because we've taken a <em>subsequence</em>, we also have $\\beta^{(k_{2,m})}_1 \\to \\beta_1$. The corresponding subsequence of $y_n$ is\n$$\ny_{2,m} = \\beta_1^{(k_{2,m})}x_1 + \\beta_2^{(k_{2,m})}x_2 + \\cdots + \\beta_n^{(k_{2,m})}x_n\n$$\nthe process continues until we reach $y_{n,m}$, for which each sequence of coefficients converges.</p>\n\n<hr>\n\n<p><strong>Note:</strong> If we have the Heine-Borel theorem, then we can reach the convergent subsequence $(y_{n,m})_{m \\in \\Bbb N}$ in one step. In fact, this trick is how one would prove Heine-Borel using Bolzano-Weierstrass.</p>\n"
},
{
"answer_id": 2092740,
"score": 2,
"text": "<p>The first point is that if $(b_i^{(m)})_{m\\in \\mathbb N}$ is a bounded sequence for each $i=1,...,n$ then there is a strictly increasing $f:\\mathbb N\\to \\mathbb N$ such that $(b_i^{(f(m))})_m$ converges to a limit $b_i$ for each $i=1,...,n.$ ...(In the Lemma, every $|b_i^{(m)}|\\leq 1.$ )</p>\n\n<p>The second point is that, in the Lemma, not all $b1,...,b_n$ can be $0$ because $\\sum_{i=1}^n|b_i|=\\lim_{m\\to \\infty}\\sum_{i=1}^n|b_i^{(f(m))}|=\\lim_{m\\to \\infty} 1=1.$</p>\n\n<p>The third point is that $y_{f(m)}=\\sum_{i=1}^nb_i^{(f(m))}x_i$ converges to $y=\\sum_{i=1}^nb_ix_i$ as $m\\to \\infty$ because $$ \\|y_{f(m)}-y\\|\\leq \\sum_{i=1}^n|b_i^{(f(m))}-b_i|\\cdot \\|x_i\\|$$ and each $b_i^{(f(m)}\\to b_i$ as $m\\to \\infty.$ </p>\n\n<p>The final point is that is that the third point implies $\\|y\\|=\\lim_{m\\to \\infty}\\|y_{f(m)}\\|=0,$ so $0=y=\\sum_{i=1}^nb_ix_i,$ with $b_1,...,b_n$ not all $0$ (second point), contradicting the linear independence of $x_1,...,x_n.$</p>\n\n<p>NOTES. (1). The first point is a useful general tool. (2). The fact that, in a normed space, $\\lim_{n\\to \\infty}\\|v_n-v\\|=0$ implies $\\lim_{n\\to \\infty}\\|v_n\\|=\\|v\\|,$ while easily proven, is a good little time-saver.</p>\n\n<p>Two consequences of this lemma are: (i). A finite-dimensional vector-subspace of a normed linear space is closed. (ii). Any two norms on $\\mathbb R^n$ (or on $\\mathbb C^n$), for finite $n$, that make $\\mathbb R^n$ (or $\\mathbb C^n$) a normed linear space, will generate the same topology, and, regarding the metrics defined by the norms, the norms' metrics are uniformly equivalent.</p>\n"
},
{
"answer_id": 2092898,
"score": 2,
"text": "<p>Another way to look at this: The function\n$$\n F(\\alpha_1,\\alpha_2,\\cdots,\\alpha_n)=\\|\\alpha_1 x_1+\\alpha_2 x_2+\\cdots+\\alpha_n x_n\\|\n$$\nis a jointly continuous function of the scalars $\\alpha_j$. The subset $\\mathscr{C}$ defined by\n$$\n |\\alpha_1|+|\\alpha_2|+\\cdots+|\\alpha_n|=1\n$$\nis a compact subset of $\\mathbb{C}^n$ (or $\\mathbb{R}^n$ if you are working over the reals.) And $F$ is never $0$ on this set. Therefore, $F$ achieves a minimum value on $\\mathscr{C}$ that is non-zero. Let $c$ be that minimum non-zero value. Then, for any $\\alpha_j$ for which $|\\alpha_1|+\\cdots+|\\alpha_n| > 0$,\n$$\n F(\\frac{\\alpha_1}{|\\alpha_1|+\\cdots+|\\alpha_n|},\\cdots,\\frac{\\alpha_n}{|\\alpha_1|+\\cdots+|\\alpha_n|}) \\ge c > 0\\\\\n \\implies F(\\alpha_1,\\cdots,\\alpha_n) \\ge c(|\\alpha_1|+\\cdots+|\\alpha_n|),\n$$\nwhich proves the inequality.</p>\n"
},
{
"answer_id": 2092699,
"score": 3,
"text": "<p>This is definitely a useful trick to understand.</p>\n\n<p>We start by looking at the sequence $(\\beta_1^{(m)})_{m \\in \\Bbb N}$. It has a convergent subsequence, $(\\beta_1^{k_m})_{m \\in \\Bbb N}$. We will equivalently label this sequence as $(\\beta_1^{k_{1,m}})_{m \\in \\Bbb N}$; more on that in a bit. We can define a corresponding subsequence $(y_{1,m})_{m \\in \\Bbb N}$ by\n$$\ny_{1,m} = \\beta_1^{(k_{1,m})}x_1 + \\beta_2^{(k_{1,m})}x_2 + \\cdots + \\beta_n^{(k_{1,m})}x_n\n$$\nFrom there, we look at the sequence $(\\beta^{(k_{1,m})}_2)_{m \\in \\Bbb N}$ (which doesn't necessarily converge). We can select a subsequence (of this subsequence!) that converges. That is, we look at $(\\beta^{(k_{2,m})}_2)_{m \\in \\Bbb N}$, which satisfies $\\lim_{m \\to \\infty} \\beta^{(k_{2,m})}_2 \\to \\beta_2$. Notice that because we've taken a <em>subsequence</em>, we also have $\\beta^{(k_{2,m})}_1 \\to \\beta_1$. The corresponding subsequence of $y_n$ is\n$$\ny_{2,m} = \\beta_1^{(k_{2,m})}x_1 + \\beta_2^{(k_{2,m})}x_2 + \\cdots + \\beta_n^{(k_{2,m})}x_n\n$$\nthe process continues until we reach $y_{n,m}$, for which each sequence of coefficients converges.</p>\n\n<hr>\n\n<p><strong>Note:</strong> If we have the Heine-Borel theorem, then we can reach the convergent subsequence $(y_{n,m})_{m \\in \\Bbb N}$ in one step. In fact, this trick is how one would prove Heine-Borel using Bolzano-Weierstrass.</p>\n"
},
{
"answer_id": 2092740,
"score": 2,
"text": "<p>The first point is that if $(b_i^{(m)})_{m\\in \\mathbb N}$ is a bounded sequence for each $i=1,...,n$ then there is a strictly increasing $f:\\mathbb N\\to \\mathbb N$ such that $(b_i^{(f(m))})_m$ converges to a limit $b_i$ for each $i=1,...,n.$ ...(In the Lemma, every $|b_i^{(m)}|\\leq 1.$ )</p>\n\n<p>The second point is that, in the Lemma, not all $b1,...,b_n$ can be $0$ because $\\sum_{i=1}^n|b_i|=\\lim_{m\\to \\infty}\\sum_{i=1}^n|b_i^{(f(m))}|=\\lim_{m\\to \\infty} 1=1.$</p>\n\n<p>The third point is that $y_{f(m)}=\\sum_{i=1}^nb_i^{(f(m))}x_i$ converges to $y=\\sum_{i=1}^nb_ix_i$ as $m\\to \\infty$ because $$ \\|y_{f(m)}-y\\|\\leq \\sum_{i=1}^n|b_i^{(f(m))}-b_i|\\cdot \\|x_i\\|$$ and each $b_i^{(f(m)}\\to b_i$ as $m\\to \\infty.$ </p>\n\n<p>The final point is that is that the third point implies $\\|y\\|=\\lim_{m\\to \\infty}\\|y_{f(m)}\\|=0,$ so $0=y=\\sum_{i=1}^nb_ix_i,$ with $b_1,...,b_n$ not all $0$ (second point), contradicting the linear independence of $x_1,...,x_n.$</p>\n\n<p>NOTES. (1). The first point is a useful general tool. (2). The fact that, in a normed space, $\\lim_{n\\to \\infty}\\|v_n-v\\|=0$ implies $\\lim_{n\\to \\infty}\\|v_n\\|=\\|v\\|,$ while easily proven, is a good little time-saver.</p>\n\n<p>Two consequences of this lemma are: (i). A finite-dimensional vector-subspace of a normed linear space is closed. (ii). Any two norms on $\\mathbb R^n$ (or on $\\mathbb C^n$), for finite $n$, that make $\\mathbb R^n$ (or $\\mathbb C^n$) a normed linear space, will generate the same topology, and, regarding the metrics defined by the norms, the norms' metrics are uniformly equivalent.</p>\n"
},
{
"answer_id": 2092898,
"score": 2,
"text": "<p>Another way to look at this: The function\n$$\n F(\\alpha_1,\\alpha_2,\\cdots,\\alpha_n)=\\|\\alpha_1 x_1+\\alpha_2 x_2+\\cdots+\\alpha_n x_n\\|\n$$\nis a jointly continuous function of the scalars $\\alpha_j$. The subset $\\mathscr{C}$ defined by\n$$\n |\\alpha_1|+|\\alpha_2|+\\cdots+|\\alpha_n|=1\n$$\nis a compact subset of $\\mathbb{C}^n$ (or $\\mathbb{R}^n$ if you are working over the reals.) And $F$ is never $0$ on this set. Therefore, $F$ achieves a minimum value on $\\mathscr{C}$ that is non-zero. Let $c$ be that minimum non-zero value. Then, for any $\\alpha_j$ for which $|\\alpha_1|+\\cdots+|\\alpha_n| > 0$,\n$$\n F(\\frac{\\alpha_1}{|\\alpha_1|+\\cdots+|\\alpha_n|},\\cdots,\\frac{\\alpha_n}{|\\alpha_1|+\\cdots+|\\alpha_n|}) \\ge c > 0\\\\\n \\implies F(\\alpha_1,\\cdots,\\alpha_n) \\ge c(|\\alpha_1|+\\cdots+|\\alpha_n|),\n$$\nwhich proves the inequality.</p>\n"
},
{
"answer_id": 2092699,
"score": 3,
"text": "<p>This is definitely a useful trick to understand.</p>\n\n<p>We start by looking at the sequence $(\\beta_1^{(m)})_{m \\in \\Bbb N}$. It has a convergent subsequence, $(\\beta_1^{k_m})_{m \\in \\Bbb N}$. We will equivalently label this sequence as $(\\beta_1^{k_{1,m}})_{m \\in \\Bbb N}$; more on that in a bit. We can define a corresponding subsequence $(y_{1,m})_{m \\in \\Bbb N}$ by\n$$\ny_{1,m} = \\beta_1^{(k_{1,m})}x_1 + \\beta_2^{(k_{1,m})}x_2 + \\cdots + \\beta_n^{(k_{1,m})}x_n\n$$\nFrom there, we look at the sequence $(\\beta^{(k_{1,m})}_2)_{m \\in \\Bbb N}$ (which doesn't necessarily converge). We can select a subsequence (of this subsequence!) that converges. That is, we look at $(\\beta^{(k_{2,m})}_2)_{m \\in \\Bbb N}$, which satisfies $\\lim_{m \\to \\infty} \\beta^{(k_{2,m})}_2 \\to \\beta_2$. Notice that because we've taken a <em>subsequence</em>, we also have $\\beta^{(k_{2,m})}_1 \\to \\beta_1$. The corresponding subsequence of $y_n$ is\n$$\ny_{2,m} = \\beta_1^{(k_{2,m})}x_1 + \\beta_2^{(k_{2,m})}x_2 + \\cdots + \\beta_n^{(k_{2,m})}x_n\n$$\nthe process continues until we reach $y_{n,m}$, for which each sequence of coefficients converges.</p>\n\n<hr>\n\n<p><strong>Note:</strong> If we have the Heine-Borel theorem, then we can reach the convergent subsequence $(y_{n,m})_{m \\in \\Bbb N}$ in one step. In fact, this trick is how one would prove Heine-Borel using Bolzano-Weierstrass.</p>\n"
},
{
"answer_id": 2092740,
"score": 2,
"text": "<p>The first point is that if $(b_i^{(m)})_{m\\in \\mathbb N}$ is a bounded sequence for each $i=1,...,n$ then there is a strictly increasing $f:\\mathbb N\\to \\mathbb N$ such that $(b_i^{(f(m))})_m$ converges to a limit $b_i$ for each $i=1,...,n.$ ...(In the Lemma, every $|b_i^{(m)}|\\leq 1.$ )</p>\n\n<p>The second point is that, in the Lemma, not all $b1,...,b_n$ can be $0$ because $\\sum_{i=1}^n|b_i|=\\lim_{m\\to \\infty}\\sum_{i=1}^n|b_i^{(f(m))}|=\\lim_{m\\to \\infty} 1=1.$</p>\n\n<p>The third point is that $y_{f(m)}=\\sum_{i=1}^nb_i^{(f(m))}x_i$ converges to $y=\\sum_{i=1}^nb_ix_i$ as $m\\to \\infty$ because $$ \\|y_{f(m)}-y\\|\\leq \\sum_{i=1}^n|b_i^{(f(m))}-b_i|\\cdot \\|x_i\\|$$ and each $b_i^{(f(m)}\\to b_i$ as $m\\to \\infty.$ </p>\n\n<p>The final point is that is that the third point implies $\\|y\\|=\\lim_{m\\to \\infty}\\|y_{f(m)}\\|=0,$ so $0=y=\\sum_{i=1}^nb_ix_i,$ with $b_1,...,b_n$ not all $0$ (second point), contradicting the linear independence of $x_1,...,x_n.$</p>\n\n<p>NOTES. (1). The first point is a useful general tool. (2). The fact that, in a normed space, $\\lim_{n\\to \\infty}\\|v_n-v\\|=0$ implies $\\lim_{n\\to \\infty}\\|v_n\\|=\\|v\\|,$ while easily proven, is a good little time-saver.</p>\n\n<p>Two consequences of this lemma are: (i). A finite-dimensional vector-subspace of a normed linear space is closed. (ii). Any two norms on $\\mathbb R^n$ (or on $\\mathbb C^n$), for finite $n$, that make $\\mathbb R^n$ (or $\\mathbb C^n$) a normed linear space, will generate the same topology, and, regarding the metrics defined by the norms, the norms' metrics are uniformly equivalent.</p>\n"
},
{
"answer_id": 2092898,
"score": 2,
"text": "<p>Another way to look at this: The function\n$$\n F(\\alpha_1,\\alpha_2,\\cdots,\\alpha_n)=\\|\\alpha_1 x_1+\\alpha_2 x_2+\\cdots+\\alpha_n x_n\\|\n$$\nis a jointly continuous function of the scalars $\\alpha_j$. The subset $\\mathscr{C}$ defined by\n$$\n |\\alpha_1|+|\\alpha_2|+\\cdots+|\\alpha_n|=1\n$$\nis a compact subset of $\\mathbb{C}^n$ (or $\\mathbb{R}^n$ if you are working over the reals.) And $F$ is never $0$ on this set. Therefore, $F$ achieves a minimum value on $\\mathscr{C}$ that is non-zero. Let $c$ be that minimum non-zero value. Then, for any $\\alpha_j$ for which $|\\alpha_1|+\\cdots+|\\alpha_n| > 0$,\n$$\n F(\\frac{\\alpha_1}{|\\alpha_1|+\\cdots+|\\alpha_n|},\\cdots,\\frac{\\alpha_n}{|\\alpha_1|+\\cdots+|\\alpha_n|}) \\ge c > 0\\\\\n \\implies F(\\alpha_1,\\cdots,\\alpha_n) \\ge c(|\\alpha_1|+\\cdots+|\\alpha_n|),\n$$\nwhich proves the inequality.</p>\n"
}
] | Question: What is the meaning of the last part of the proof of this theorem?
Answer 1: <p>This is definitely a useful trick to understand.</p>
<p>We start by looking at the sequence $(\beta_1^{(m)})_{m \in \Bbb N}$. It has a convergent subsequence, $(\beta_1^{(k_m)})_{m \in \Bbb N}$. We will equivalently label this sequence as $(\beta_1^{(k_{1,m})})_{m \in \Bbb N}$; more on that in a bit. We can define a corresponding subsequence $(y_{1,m})_{m \in \Bbb N}$ by
$$
y_{1,m} = \beta_1^{(k_{1,m})}x_1 + \beta_2^{(k_{1,m})}x_2 + \cdots + \beta_n^{(k_{1,m})}x_n
$$
From there, we look at the sequence $(\beta^{(k_{1,m})}_2)_{m \in \Bbb N}$ (which doesn't necessarily converge). We can select a subsequence (of this subsequence!) that converges. That is, we look at $(\beta^{(k_{2,m})}_2)_{m \in \Bbb N}$, which satisfies $\lim_{m \to \infty} \beta^{(k_{2,m})}_2 = \beta_2$. Notice that because we've taken a <em>subsequence</em>, we also have $\beta^{(k_{2,m})}_1 \to \beta_1$. The corresponding subsequence of $(y_m)$ is
$$
y_{2,m} = \beta_1^{(k_{2,m})}x_1 + \beta_2^{(k_{2,m})}x_2 + \cdots + \beta_n^{(k_{2,m})}x_n
$$
The process continues until we reach $y_{n,m}$, for which each sequence of coefficients converges.</p>
<hr>
<p><strong>Note:</strong> If we have the Heine-Borel theorem, then we can reach the convergent subsequence $(y_{n,m})_{m \in \Bbb N}$ in one step. In fact, this trick is how one would prove Heine-Borel using Bolzano-Weierstrass.</p>
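To see the index bookkeeping in a concrete case, here is a minimal Python sketch; the oscillating coefficient sequences and every name in it are invented for illustration, not taken from the answer above.

```python
# Toy coefficient sequences: beta_1 has period 2 and beta_2 has period 4,
# so neither converges on its own, yet nested index sets fix both at once.
beta1 = lambda m: (-1) ** m
beta2 = lambda m: (-1) ** (m // 2)

M = range(100)

# Step 1: indices k_{1,m} along which beta_1 converges (here it is constant).
k1 = [m for m in M if beta1(m) == 1]

# Step 2: a further subsequence k_{2,m} of k1 along which beta_2 converges
# too; being a subsequence of k1, it keeps the convergence of beta_1.
k2 = [m for m in k1 if beta2(m) == 1]

assert all(beta1(m) == 1 and beta2(m) == 1 for m in k2)
print(k2[:8])  # [0, 4, 8, 12, 16, 20, 24, 28]
```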
Answer 2: <p>The first point is that if $(b_i^{(m)})_{m\in \mathbb N}$ is a bounded sequence for each $i=1,...,n$ then there is a strictly increasing $f:\mathbb N\to \mathbb N$ such that $(b_i^{(f(m))})_m$ converges to a limit $b_i$ for each $i=1,...,n.$ ...(In the Lemma, every $|b_i^{(m)}|\leq 1.$ )</p>
<p>The second point is that, in the Lemma, not all $b_1,...,b_n$ can be $0$ because $\sum_{i=1}^n|b_i|=\lim_{m\to \infty}\sum_{i=1}^n|b_i^{(f(m))}|=\lim_{m\to \infty} 1=1.$</p>
<p>The third point is that $y_{f(m)}=\sum_{i=1}^nb_i^{(f(m))}x_i$ converges to $y=\sum_{i=1}^nb_ix_i$ as $m\to \infty$ because $$ \|y_{f(m)}-y\|\leq \sum_{i=1}^n|b_i^{(f(m))}-b_i|\cdot \|x_i\|$$ and each $b_i^{(f(m))}\to b_i$ as $m\to \infty.$ </p>
<p>The final point is that the third point implies $\|y\|=\lim_{m\to \infty}\|y_{f(m)}\|=0,$ so $0=y=\sum_{i=1}^nb_ix_i,$ with $b_1,...,b_n$ not all $0$ (second point), contradicting the linear independence of $x_1,...,x_n.$</p>
<p>NOTES. (1). The first point is a useful general tool. (2). The fact that, in a normed space, $\lim_{n\to \infty}\|v_n-v\|=0$ implies $\lim_{n\to \infty}\|v_n\|=\|v\|,$ while easily proven, is a good little time-saver.</p>
<p>Two consequences of this lemma are: (i). A finite-dimensional vector-subspace of a normed linear space is closed. (ii). Any two norms on $\mathbb R^n$ (or on $\mathbb C^n$), for finite $n$, that make $\mathbb R^n$ (or $\mathbb C^n$) a normed linear space, will generate the same topology, and, regarding the metrics defined by the norms, the norms' metrics are uniformly equivalent.</p>
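As a numerical sanity check of the estimate in the third point, the sketch below verifies $\|y_{f(m)}-y\|\leq \sum_{i=1}^n|b_i^{(f(m))}-b_i|\cdot \|x_i\|$ for made-up vectors and coefficients; nothing in it is taken from the answer above.

```python
import random

def norm(v):  # Euclidean norm on R^3; any norm works, only its axioms are used
    return sum(t * t for t in v) ** 0.5

x = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 1.0, 1.0)]  # the fixed x_i
b = [0.3, -0.5, 0.2]                                      # limit coefficients

def combo(c):  # the vector sum c_1 x_1 + c_2 x_2 + c_3 x_3
    return tuple(sum(c[i] * x[i][j] for i in range(3)) for j in range(3))

y = combo(b)
for m in range(1, 6):
    bm = [bi + random.uniform(-1, 1) / m for bi in b]  # b_i^(f(m)) -> b_i
    ym = combo(bm)
    lhs = norm(tuple(ym[j] - y[j] for j in range(3)))
    rhs = sum(abs(bm[i] - b[i]) * norm(x[i]) for i in range(3))
    assert lhs <= rhs + 1e-12  # the triangle-inequality estimate holds
```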
Answer 3: <p>Another way to look at this: The function
$$
F(\alpha_1,\alpha_2,\cdots,\alpha_n)=\|\alpha_1 x_1+\alpha_2 x_2+\cdots+\alpha_n x_n\|
$$
is a jointly continuous function of the scalars $\alpha_j$. The subset $\mathscr{C}$ defined by
$$
|\alpha_1|+|\alpha_2|+\cdots+|\alpha_n|=1
$$
is a compact subset of $\mathbb{C}^n$ (or $\mathbb{R}^n$ if you are working over the reals.) And $F$ is never $0$ on this set. Therefore, $F$ achieves a minimum value on $\mathscr{C}$ that is non-zero. Let $c$ be that minimum non-zero value. Then, for any $\alpha_j$ for which $|\alpha_1|+\cdots+|\alpha_n| > 0$,
$$
F(\frac{\alpha_1}{|\alpha_1|+\cdots+|\alpha_n|},\cdots,\frac{\alpha_n}{|\alpha_1|+\cdots+|\alpha_n|}) \ge c > 0\\
\implies F(\alpha_1,\cdots,\alpha_n) \ge c(|\alpha_1|+\cdots+|\alpha_n|),
$$
which proves the inequality.</p>
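The minimum in this argument can also be approximated numerically. The sketch below does so for one invented choice of $x_1, x_2$ in $\mathbb{R}^2$ with the Euclidean norm, sampling the compact set $|\alpha_1|+|\alpha_2|=1$; it illustrates the argument rather than being part of it.

```python
import math, random

x1, x2 = (1.0, 0.0), (1.0, 1.0)  # two linearly independent vectors in R^2

def F(a1, a2):
    return math.hypot(a1 * x1[0] + a2 * x2[0], a1 * x1[1] + a2 * x2[1])

# Approximate the minimum of F over the set |a1| + |a2| = 1.
c = min(F(t, s * (1.0 - abs(t)))
        for _ in range(200_000)
        for t in [random.uniform(-1.0, 1.0)]
        for s in (1.0, -1.0))
print(f"approximate minimum c = {c:.4f}")  # about 0.4472, strictly positive

# Homogeneity then gives F(a1, a2) >= c * (|a1| + |a2|) for arbitrary scalars;
# the 0.99 factor only absorbs the sampling error in the approximate c.
a1, a2 = random.uniform(-5, 5), random.uniform(-5, 5)
assert F(a1, a2) >= 0.99 * c * (abs(a1) + abs(a2))
```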
|
Polynomial factorization into irreducibles over $\mathbb{Q}[x]$ | 2,093,272 | 5 | https://math.stackexchange.com/questions/2093272/polynomial-factorization-into-irreducibles-over-mathbbqx | <p>I need to find irreducible factors of $f(x)=x^4+3x^3+2x^2+1$ in $\mathbb{Q}[x]$ and explicitly prove that these factors are indeed irreducible.</p>
<p>I believe we can't reduce $f(x)$ any further, but I have to prove that this is the case.</p>
<p>I have already shown, using the rational root theorem, that there can't be any linear factors in such a factorization. But another possibility is a factorization into two polynomials of degree $2$, so something like $f(x)=(ax^2+bx+c)(mx^2+nx+p)$. I tried writing it out, but since we're working with rationals I find it difficult to find or rule out such polynomials.</p>
<p>What do I have to do in this case? And more generally: is there a way to find such second-degree factors more easily over $\mathbb{Q}$?</p>
| [
{
"answer_id": 2093316,
"score": 9,
"text": "<p>If $x^4+3x^3+2x^2+1$ were reducible in $\\mathbb Z[X]$ then it would also be reducible modulo $2$.</p>\n\n<p>But $x^4+x^3+1$ <a href=\"https://math.stackexchange.com/questions/32197/find-all-irreducible-monic-polynomials-in-mathbbz-2x-with-degree-equal\">is irreducible</a> over $\\mathbb Z_2$ (there are only a few possible factors to check), so $x^4+3x^3+2x^2+1$ is irreducible over $\\mathbb Z$.</p>\n\n<p>Therefore, due to Gauß's Lemma, $x^4+3x^3+2x^2+1$ is also irreducible over $\\mathbb Q$.</p>\n"
},
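The finite check mentioned in the answer above can be mechanized. The sketch below is one possible brute force (the bitmask encoding and helper are implementation choices, not from the answer): it multiplies every pair of monic candidate factors over $\mathbb Z_2$ and confirms none gives $x^4+x^3+1$.

```python
def mul(p, q):
    """Carry-less multiplication = polynomial product over Z_2 (bit i <-> x^i)."""
    r = 0
    while q:
        if q & 1:
            r ^= p
        p <<= 1
        q >>= 1
    return r

target = 0b11001  # x^4 + x^3 + 1

# A degree-4 factorization splits as degrees (1,3) or (2,2); enumerate all
# monic polynomials of those degrees (top bit set) and try every product.
monic = lambda d: range(1 << d, 1 << (d + 1))
assert not any(mul(p, q) == target
               for d1, d2 in [(1, 3), (2, 2)]
               for p in monic(d1) for q in monic(d2))
print("x^4 + x^3 + 1 is irreducible over Z_2")
```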
{
"answer_id": 2093285,
"score": 7,
"text": "<p>.Here's a wonderful result that will tell you about the irreducibility of this polynomial, but it still requires far more work:</p>\n\n<blockquote>\n <p>Cohn criteria: The polynomial <span class=\"math-container\">$a_nx^n + a_{n-1}x^{n-1} + \\ldots + a_1x + a_0$</span> , where <span class=\"math-container\">$0 \\leq a_i \\leq 9$</span> , is irreducible if the number <span class=\"math-container\">$a_na_{n-1} \\ldots a_0$</span> is prime, in base <span class=\"math-container\">$10$</span>. This is true, for any base greater than <span class=\"math-container\">$2$</span> as well, as long as the <span class=\"math-container\">$a_i$</span> are smaller than the base.</p>\n</blockquote>\n\n<p>Suppose we take base four, then the coefficients give the base 4 number <span class=\"math-container\">$10321$</span> which is <span class=\"math-container\">$1 + (2 \\times 4) + (3 \\times 16) + (1 \\times 4096) = 4153$</span> in decimals, which you can verify is prime easily (it is still quite small, this process took me under a minute to verify). Hence, by the Cohn criteria, this works out to be irreducible. Surely there are better criteria than this one, but somehow I felt inclined to use this one.</p>\n\n<p>In general, you have to find the roots through the \"quartic formula\", but then this is very hard to apply. Normally when this kind of a question is given to me, I look around to using Eisenstein or Cohn or Perron criteria (you can read these up, there's a huge literature on it). Either way, it will take effort to decode quartic polynomials in general.</p>\n"
},
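The base search for Cohn's criterion is easy to automate; in this sketch the helper functions are invented for illustration, and the loop simply evaluates $f$ at each admissible base and tests the value for primality by trial division.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

f = lambda b: b**4 + 3 * b**3 + 2 * b**2 + 1

for base in range(4, 11):  # every coefficient is at most 3, so base >= 4 works
    print(base, f(base), "prime" if is_prime(f(base)) else "composite")
# base 4 -> 481 = 13 * 37 is composite, but base 5 -> 1051 is prime.
```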
{
"answer_id": 2942032,
"score": 1,
"text": "<p><span class=\"math-container\">$f$</span> has degree <span class=\"math-container\">$4$</span> and assumes prime values for these <span class=\"math-container\">$9 > 2 \\cdot 4$</span> points and so must be irreducible: \n<span class=\"math-container\">$$\n\\begin{array}{rl}\nn & f(n) \\\\\n-8 & 2689 \\\\\n-7 & 1471 \\\\\n-4 & 97 \\\\\n-3 & 19 \\\\\n1 & 7 \\\\\n3 & 181 \\\\\n5 & 1051 \\\\\n6 & 2017 \\\\\n7 & 3529 \\\\\n\\end{array}\n$$</span></p>\n"
},
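The table in the answer above takes only a few lines to verify; the sketch below (an illustration, not part of the answer) recomputes the nine values, confirms each is prime, and checks that nine points exceed twice the degree.

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

f = lambda n: n**4 + 3 * n**3 + 2 * n**2 + 1
points = [-8, -7, -4, -3, 1, 3, 5, 6, 7]

assert [f(n) for n in points] == [2689, 1471, 97, 19, 7, 181, 1051, 2017, 3529]
assert all(is_prime(f(n)) for n in points)
assert len(points) > 2 * 4  # more than twice the degree of f
print("all nine values are prime")
```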
{
"answer_id": 2093285,
"score": 7,
"text": "<p>.Here's a wonderful result that will tell you about the irreducibility of this polynomial, but it still requires far more work:</p>\n\n<blockquote>\n <p>Cohn criteria: The polynomial <span class=\"math-container\">$a_nx^n + a_{n-1}x^{n-1} + \\ldots + a_1x + a_0$</span> , where <span class=\"math-container\">$0 \\leq a_i \\leq 9$</span> , is irreducible if the number <span class=\"math-container\">$a_na_{n-1} \\ldots a_0$</span> is prime, in base <span class=\"math-container\">$10$</span>. This is true, for any base greater than <span class=\"math-container\">$2$</span> as well, as long as the <span class=\"math-container\">$a_i$</span> are smaller than the base.</p>\n</blockquote>\n\n<p>Suppose we take base four, then the coefficients give the base 4 number <span class=\"math-container\">$10321$</span> which is <span class=\"math-container\">$1 + (2 \\times 4) + (3 \\times 16) + (1 \\times 4096) = 4153$</span> in decimals, which you can verify is prime easily (it is still quite small, this process took me under a minute to verify). Hence, by the Cohn criteria, this works out to be irreducible. Surely there are better criteria than this one, but somehow I felt inclined to use this one.</p>\n\n<p>In general, you have to find the roots through the \"quartic formula\", but then this is very hard to apply. Normally when this kind of a question is given to me, I look around to using Eisenstein or Cohn or Perron criteria (you can read these up, there's a huge literature on it). Either way, it will take effort to decode quartic polynomials in general.</p>\n"
},
{
"answer_id": 2942032,
"score": 1,
"text": "<p><span class=\"math-container\">$f$</span> has degree <span class=\"math-container\">$4$</span> and assumes prime values for these <span class=\"math-container\">$9 > 2 \\cdot 4$</span> points and so must be irreducible: \n<span class=\"math-container\">$$\n\\begin{array}{rl}\nn & f(n) \\\\\n-8 & 2689 \\\\\n-7 & 1471 \\\\\n-4 & 97 \\\\\n-3 & 19 \\\\\n1 & 7 \\\\\n3 & 181 \\\\\n5 & 1051 \\\\\n6 & 2017 \\\\\n7 & 3529 \\\\\n\\end{array}\n$$</span></p>\n"
},
{
"answer_id": 2093316,
"score": 9,
"text": "<p>If $x^4+3x^3+2x^2+1$ were reducible in $\\mathbb Z[X]$ then it would also be reducible modulo $2$.</p>\n\n<p>But $x^4+x^3+1$ <a href=\"https://math.stackexchange.com/questions/32197/find-all-irreducible-monic-polynomials-in-mathbbz-2x-with-degree-equal\">is irreducible</a> over $\\mathbb Z_2$ (there are only a few possible factors to check), so $x^4+3x^3+2x^2+1$ is irreducible over $\\mathbb Z$.</p>\n\n<p>Therefore, due to Gauß's Lemma, $x^4+3x^3+2x^2+1$ is also irreducible over $\\mathbb Q$.</p>\n"
},
{
"answer_id": 2093285,
"score": 7,
"text": "<p>.Here's a wonderful result that will tell you about the irreducibility of this polynomial, but it still requires far more work:</p>\n\n<blockquote>\n <p>Cohn criteria: The polynomial <span class=\"math-container\">$a_nx^n + a_{n-1}x^{n-1} + \\ldots + a_1x + a_0$</span> , where <span class=\"math-container\">$0 \\leq a_i \\leq 9$</span> , is irreducible if the number <span class=\"math-container\">$a_na_{n-1} \\ldots a_0$</span> is prime, in base <span class=\"math-container\">$10$</span>. This is true, for any base greater than <span class=\"math-container\">$2$</span> as well, as long as the <span class=\"math-container\">$a_i$</span> are smaller than the base.</p>\n</blockquote>\n\n<p>Suppose we take base four, then the coefficients give the base 4 number <span class=\"math-container\">$10321$</span> which is <span class=\"math-container\">$1 + (2 \\times 4) + (3 \\times 16) + (1 \\times 4096) = 4153$</span> in decimals, which you can verify is prime easily (it is still quite small, this process took me under a minute to verify). Hence, by the Cohn criteria, this works out to be irreducible. Surely there are better criteria than this one, but somehow I felt inclined to use this one.</p>\n\n<p>In general, you have to find the roots through the \"quartic formula\", but then this is very hard to apply. Normally when this kind of a question is given to me, I look around to using Eisenstein or Cohn or Perron criteria (you can read these up, there's a huge literature on it). Either way, it will take effort to decode quartic polynomials in general.</p>\n"
},
{
"answer_id": 2942032,
"score": 1,
"text": "<p><span class=\"math-container\">$f$</span> has degree <span class=\"math-container\">$4$</span> and assumes prime values for these <span class=\"math-container\">$9 > 2 \\cdot 4$</span> points and so must be irreducible: \n<span class=\"math-container\">$$\n\\begin{array}{rl}\nn & f(n) \\\\\n-8 & 2689 \\\\\n-7 & 1471 \\\\\n-4 & 97 \\\\\n-3 & 19 \\\\\n1 & 7 \\\\\n3 & 181 \\\\\n5 & 1051 \\\\\n6 & 2017 \\\\\n7 & 3529 \\\\\n\\end{array}\n$$</span></p>\n"
}
] | Question: Polynomial factorization into irreducibles over $\mathbb{Q}[x]$
Answer 1: <p>If $x^4+3x^3+2x^2+1$ were reducible in $\mathbb Z[X]$ then, since it is monic, it would factor into two monic polynomials of positive degree, and so it would also be reducible modulo $2$.</p>
<p>But $x^4+x^3+1$ <a href="https://math.stackexchange.com/questions/32197/find-all-irreducible-monic-polynomials-in-mathbbz-2x-with-degree-equal">is irreducible</a> over $\mathbb Z_2$ (there are only a few possible factors to check), so $x^4+3x^3+2x^2+1$ is irreducible over $\mathbb Z$.</p>
<p>Therefore, due to Gauß's Lemma, $x^4+3x^3+2x^2+1$ is also irreducible over $\mathbb Q$.</p>
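As a sanity check on the mod-$2$ step, here is a minimal Python sketch (my addition, not part of the answer) that brute-forces the claim: polynomials over $\mathbb F_2$ are encoded as bit masks (bit $i$ holds the coefficient of $x^i$), and a reducible quartic over a field must have a factor of degree $1$ or $2$. The encoding and the helper name `gf2_mod` are conventions of this sketch only.

```python
# Hypothetical helper for this sketch: remainder of carry-less
# polynomial division over F_2, with polynomials stored as bit masks
# (bit i = coefficient of x^i).
def gf2_mod(a: int, b: int) -> int:
    db = b.bit_length() - 1                    # degree of the divisor
    while a and a.bit_length() - 1 >= db:
        # subtracting over F_2 is XOR; align b with a's leading term
        a ^= b << (a.bit_length() - 1 - db)
    return a

f = 0b11001                                    # x^4 + x^3 + 1
# Checking the masks 0b10 .. 0b111 covers every polynomial of
# degree 1 or 2 over F_2, hence every relevant divisor.
divisors = [d for d in range(2, 8) if gf2_mod(f, d) == 0]
print(divisors)                                # [] -> irreducible over F_2
```

An empty list confirms the linked claim: no degree-$1$ or degree-$2$ polynomial over $\mathbb F_2$ divides $x^4+x^3+1$.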
Answer 2: <p>Here's a wonderful result that will tell you about the irreducibility of this polynomial, but it still requires far more work:</p>
<blockquote>
  <p>Cohn's criterion: The polynomial <span class="math-container">$a_nx^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0$</span>, where <span class="math-container">$0 \leq a_i \leq 9$</span>, is irreducible if the number <span class="math-container">$a_na_{n-1} \ldots a_0$</span> is prime in base <span class="math-container">$10$</span>. This is true for any base <span class="math-container">$b \geq 2$</span> as well, as long as the <span class="math-container">$a_i$</span> are smaller than the base.</p>
</blockquote>
<p>Suppose we take base five; then the coefficients give the base-5 number <span class="math-container">$13201$</span>, which is <span class="math-container">$1 + (0 \times 5) + (2 \times 25) + (3 \times 125) + (1 \times 625) = 1051$</span> in decimal, which you can verify is prime easily (it is still quite small; this process took me under a minute). Hence, by Cohn's criterion, the polynomial is irreducible. (Base four is uninformative here: the same digits read in base <span class="math-container">$4$</span> give <span class="math-container">$481 = 13 \times 37$</span>, which is not prime, so the criterion simply does not apply there.) Surely there are better criteria than this one, but somehow I felt inclined to use this one.</p>
<p>In general, you would have to find the roots through the "quartic formula", but this is very hard to apply. Normally when this kind of a question is given to me, I look around to using the Eisenstein, Cohn or Perron criteria (you can read these up; there's a huge literature on it). Either way, it will take effort to decide the irreducibility of quartic polynomials in general.</p>
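The corrected arithmetic above is easy to machine-check. Here is a short Python sketch (mine, not the answer's; the helper names `is_prime` and `digits_value` are invented for it) that reads the coefficients $1, 3, 2, 0, 1$ as digits in a few bases and tests the resulting integers by trial division.

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for the small numbers that appear here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def digits_value(digits, base):
    """Interpret a digit list (most significant first) in the given base."""
    v = 0
    for dig in digits:
        v = v * base + dig
    return v

coeffs = [1, 3, 2, 0, 1]        # a_4, a_3, a_2, a_1, a_0 of the quartic
for b in (4, 5, 10):
    n = digits_value(coeffs, b)
    print(b, n, is_prime(n))
# 4   481   False  (481 = 13 * 37, so base 4 says nothing)
# 5   1051  True   (prime, so Cohn's criterion applies in base 5)
# 10  13201 False  (13201 = 43 * 307, so base 10 says nothing either)
```

So among small bases it is base $5$ that makes the criterion bite; note $1051 = f(5)$, matching the table in the next answer.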
Answer 3: <p><span class="math-container">$f$</span> has degree <span class="math-container">$4$</span> and assumes prime values for these <span class="math-container">$9 > 2 \cdot 4$</span> points and so must be irreducible (if <span class="math-container">$f = gh$</span> were a nontrivial factorization over <span class="math-container">$\mathbb{Z}$</span>, then at each such point one of <span class="math-container">$g, h$</span> would take the value <span class="math-container">$\pm 1$</span>, which can happen at most <span class="math-container">$2\deg g + 2\deg h = 2 \cdot 4$</span> times in total):
<span class="math-container">$$
\begin{array}{rl}
n & f(n) \\
-8 & 2689 \\
-7 & 1471 \\
-4 & 97 \\
-3 & 19 \\
1 & 7 \\
3 & 181 \\
5 & 1051 \\
6 & 2017 \\
7 & 3529 \\
\end{array}
$$</span></p>
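The table itself is quick to verify mechanically. The following Python sketch (not from the answer) evaluates $f(n) = n^4 + 3n^3 + 2n^2 + 1$ at the nine listed points and checks each value by trial division.

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def f(n: int) -> int:
    return n**4 + 3 * n**3 + 2 * n**2 + 1

points = [-8, -7, -4, -3, 1, 3, 5, 6, 7]
for n in points:
    print(n, f(n), is_prime(f(n)))   # every row prints True
# Nine points exceed 2 * deg(f) = 8, which is what the criterion needs.
assert len(points) > 2 * 4
```

All nine values come out prime, and $9 > 2 \cdot 4$, so the hypothesis of the prime-values criterion is met.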
|
Can shooting a club collapse cardinals? | 2,093,568 | 5 | https://math.stackexchange.com/questions/2093568/can-shooting-a-club-collapse-cardinals | <p>Let $S$ be a stationary costationary subset of $\omega_1$ and let $\mathbb{P}$ be the usual poset that shoots a club through $S$ (with closed initial segments, ordered by end-extension). Does $\mathbb{P}$ force CH?</p>
<p>Of course, if CH already holds, then it continues to hold in the extension, since $\mathbb{P}$ doesn't add reals. Also, if we do not assume that $S$ is costationary, the answer is yes, since the intersection of the generic club with the club contained in $S$ codes a Cohen subset of $\omega_1$.</p>
<p>I am mostly interested in this question because I would like to know more about what the generic club looks like. It will obviously be a fresh subset of $\omega_1$ (in the sense that all of its initial segments are in the ground model), but I wonder whether it also codes the ground model reals in some way.</p>
<p>I would also welcome comments about the situation at larger cardinals $\kappa$ (with $S$ appropriately fat).</p>
| [
{
"answer_id": 2093590,
"score": 3,
"text": "<p>Unless I'm misunderstanding the forcing, I think the answer is yes:</p>\n\n<p>Let $G$ be such a generic club. For $\\alpha\\in G$ and $n\\in\\omega$, let $\\alpha_n$ be the $n$th element of $G$ after $\\alpha$. Now say that an ordinal $\\beta\\in S$ is an <em>$S$-successor</em> if there is a $\\gamma\\in S$ with $\\gamma<\\beta$ such that $S\\cap (\\gamma,\\beta)=\\emptyset$, and $\\beta\\in S$ is an <em>$S$-limit</em> otherwise. Note that $S$ contains $\\omega_1$-many $S$-limits and $S$-successors.</p>\n\n<p>Now consider the real $$r(\\alpha)=\\{n: \\alpha_n\\mbox{ is an $S$-successor}\\}.$$ I believe that every ground model real will generically appear as $r(\\alpha)$ for some $\\alpha$; and this gives a surjection in $V[G]$ from $\\omega_1$ to $\\mathbb{R}$.</p>\n\n<hr>\n\n<p>I think a similar trick works for arbitrary $\\kappa$. Define $\\alpha_\\eta$ as above for $\\eta<\\kappa$, with the caveat that if $\\eta$ is a limit then $\\alpha_\\eta$ is the $(\\eta+1)$th element of $G$ after $\\alpha$ (since limit elements of $G$ are forced to be $S$-limits). Then, defining $r(\\alpha)$ as above, I believe every ground subset of $\\kappa$ is represented as $r(\\alpha)$ for some $\\alpha$, so the generic extension satisfies $2^\\kappa=\\kappa^+$.</p>\n"
},
{
"answer_id": 2093901,
"score": 3,
"text": "<p>After reading Noah's answer, I realized that, in fact, forcing with $\\mathbb{P}$ will add a Cohen subset of $\\omega_1$ regardless of whether $S$ is costationary or not. This then also answers the question (and gives other information as well).\n$\\newcommand{\\P}{\\mathbb{P}}$</p>\n\n<p>There is a projection from $\\P$ to $\\operatorname{Add}(\\omega_1,1)$ that works as follows: given a condition $p\\in\\P$ let $p_\\xi$ be its $\\xi$th element. We can interpret $p_{\\xi+1}$ as giving the $\\xi$th bit of a condition in $\\operatorname{Add}(\\omega_1,1)$, based on whether $p_{\\xi+1}$ is an isolated point of $S$ or not. Let $p^*\\in\\operatorname{Add}(\\omega_1,1)$ be the condition derived from $p$ in this way.</p>\n\n<p>The key to seeing that the map $p\\mapsto p^*$ is a projection is the fact that any stationary subset of $\\omega_1$ is fat, meaning that it contains closed copies of any $\\alpha<\\omega_1$. So, given a Cohen condition $q\\leq p^*$, we can find in $S$ a closed copy of a very large countable successor ordinal above $\\max(p)$ that gives us enough space to code the extra information from $q$ into $\\bar{p}\\leq p$ with $\\bar{p}^*\\leq q$.</p>\n\n<p>The same argument should work to show that shooting a club through a fat stationary subset of $\\kappa$ adds a Cohen subset of $\\kappa$. </p>\n"
}
] | Question: Can shooting a club collapse cardinals?
Answer 1: <p>Unless I'm misunderstanding the forcing, I think the answer is yes:</p>
<p>Let $G$ be such a generic club. For $\alpha\in G$ and $n\in\omega$, let $\alpha_n$ be the $n$th element of $G$ after $\alpha$. Now say that an ordinal $\beta\in S$ is an <em>$S$-successor</em> if there is a $\gamma\in S$ with $\gamma<\beta$ such that $S\cap (\gamma,\beta)=\emptyset$, and $\beta\in S$ is an <em>$S$-limit</em> otherwise. Note that $S$ contains $\omega_1$-many $S$-limits and $S$-successors.</p>
<p>Now consider the real $$r(\alpha)=\{n: \alpha_n\mbox{ is an $S$-successor}\}.$$ I believe that every ground model real will generically appear as $r(\alpha)$ for some $\alpha$; and this gives a surjection in $V[G]$ from $\omega_1$ to $\mathbb{R}$.</p>
<hr>
<p>I think a similar trick works for arbitrary $\kappa$. Define $\alpha_\eta$ as above for $\eta<\kappa$, with the caveat that if $\eta$ is a limit then $\alpha_\eta$ is the $(\eta+1)$th element of $G$ after $\alpha$ (since limit elements of $G$ are forced to be $S$-limits). Then, defining $r(\alpha)$ as above, I believe every ground subset of $\kappa$ is represented as $r(\alpha)$ for some $\alpha$, so the generic extension satisfies $2^\kappa=\kappa^+$.</p>
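One way to make the claim that every ground-model real appears as $r(\alpha)$ precise, offered as my own formalization rather than anything stated in the answer, is through a density statement. For a ground-model real $x \subseteq \omega$, consider

```latex
% D_x collects the conditions that already realize x as a code;
% \alpha^{p}_{n} denotes the n-th element of p above \alpha.
\[
D_x = \bigl\{\, p \in \mathbb{P} : \exists\, \alpha \in p \;
      \forall n \in \omega \;
      \bigl( \alpha^{p}_{n} \text{ exists, and }
      \alpha^{p}_{n} \text{ is an } S\text{-successor} \iff n \in x \bigr) \,\bigr\}.
\]
```

Since conditions are end-extended, a $p \in D_x \cap G$ already decides $G$ on the relevant interval, so it forces $r(\alpha) = x$ for the witnessing $\alpha$. The real content of the answer is then that each $D_x$ is dense, which is exactly where the supply of $S$-successors, $S$-limits, and closure points inside $S$ gets used.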
Answer 2: <p>After reading Noah's answer, I realized that, in fact, forcing with $\mathbb{P}$ will add a Cohen subset of $\omega_1$ regardless of whether $S$ is costationary or not. This then also answers the question (and gives other information as well).
$\newcommand{\P}{\mathbb{P}}$</p>
<p>There is a projection from $\P$ to $\operatorname{Add}(\omega_1,1)$ that works as follows: given a condition $p\in\P$ let $p_\xi$ be its $\xi$th element. We can interpret $p_{\xi+1}$ as giving the $\xi$th bit of a condition in $\operatorname{Add}(\omega_1,1)$, based on whether $p_{\xi+1}$ is an isolated point of $S$ or not. Let $p^*\in\operatorname{Add}(\omega_1,1)$ be the condition derived from $p$ in this way.</p>
<p>The key to seeing that the map $p\mapsto p^*$ is a projection is the fact that any stationary subset of $\omega_1$ is fat, meaning that it contains closed copies of any $\alpha<\omega_1$. So, given a Cohen condition $q\leq p^*$, we can find in $S$ a closed copy of a very large countable successor ordinal above $\max(p)$ that gives us enough space to code the extra information from $q$ into $\bar{p}\leq p$ with $\bar{p}^*\leq q$.</p>
<p>The same argument should work to show that shooting a club through a fat stationary subset of $\kappa$ adds a Cohen subset of $\kappa$. </p>
Answer 3: <p>Unless I'm misunderstanding the forcing, I think the answer is yes:</p>
<p>Let $G$ be such a generic club. For $\alpha\in G$ and $n\in\omega$, let $\alpha_n$ be the $n$th element of $G$ after $\alpha$. Now say that an ordinal $\beta\in S$ is an <em>$S$-successor</em> if there is a $\gamma\in S$ with $\gamma<\beta$ such that $S\cap (\gamma,\beta)=\emptyset$, and $\beta\in S$ is an <em>$S$-limit</em> otherwise. Note that $S$ contains $\omega_1$-many $S$-limits and $S$-successors.</p>
<p>Now consider the real $$r(\alpha)=\{n: \alpha_n\mbox{ is an $S$-successor}\}.$$ I believe that every ground model real will generically appear as $r(\alpha)$ for some $\alpha$; and this gives a surjection in $V[G]$ from $\omega_1$ to $\mathbb{R}$.</p>
<hr>
<p>I think a similar trick works for arbitrary $\kappa$. Define $\alpha_\eta$ as above for $\eta<\kappa$, with the caveat that if $\eta$ is a limit then $\alpha_\eta$ is the $(\eta+1)$th element of $G$ after $\alpha$ (since limit elements of $G$ are forced to be $S$-limits). Then, defining $r(\alpha)$ as above, I believe every ground subset of $\kappa$ is represented as $r(\alpha)$ for some $\alpha$, so the generic extension satisfies $2^\kappa=\kappa^+$.</p>
Answer 4: <p>After reading Noah's answer, I realized that, in fact, forcing with $\mathbb{P}$ will add a Cohen subset of $\omega_1$ regardless of whether $S$ is costationary or not. This then also answers the question (and gives other information as well).
$\newcommand{\P}{\mathbb{P}}$</p>
<p>There is a projection from $\P$ to $\operatorname{Add}(\omega_1,1)$ that works as follows: given a condition $p\in\P$ let $p_\xi$ be its $\xi$th element. We can interpret $p_{\xi+1}$ as giving the $\xi$th bit of a condition in $\operatorname{Add}(\omega_1,1)$, based on whether $p_{\xi+1}$ is an isolated point of $S$ or not. Let $p^*\in\operatorname{Add}(\omega_1,1)$ be the condition derived from $p$ in this way.</p>
<p>The key to seeing that the map $p\mapsto p^*$ is a projection is the fact that any stationary subset of $\omega_1$ is fat, meaning that it contains closed copies of any $\alpha<\omega_1$. So, given a Cohen condition $q\leq p^*$, we can find in $S$ a closed copy of a very large countable successor ordinal above $\max(p)$ that gives us enough space to code the extra information from $q$ into $\bar{p}\leq p$ with $\bar{p}^*\leq q$.</p>
<p>The same argument should work to show that shooting a club through a fat stationary subset of $\kappa$ adds a Cohen subset of $\kappa$. </p>
|
Explanation of Euler's identity | 2,093,969 | 5 | https://math.stackexchange.com/questions/2093969/explanation-of-eulers-identity | <p>I am not a math student and I don't do math professionally, so sorry for the stupid question (if it is one). I'm interested in Euler's identity: </p>
<p>$$e^{i \pi} + 1 = 0$$
or
$$e^{i \pi} = -1.$$</p>
<p>Is it the same as $e^{ix}=\cos(x) + i\sin(x)$?</p>
<p>Can you give some physics or real-life examples of its use, and what is the difference between the two formulas?</p>
<p>I understand the idea behind $e^x$: simply explained, it is the amount of continuous growth after a certain amount of time ($x$). But why do we use $\pi$ or $i$, in which cases, and what do they represent?</p>
<p>In which cases do we multiply $e$ by another number and in which do we raise it to the power of another number (the difference between $ye^x$ and $e^x$)? </p>
<p>Some articles, books or videos would be helpful. I watched some, but they only explain how to prove the equation and how to find the derivative, not how to use it.</p>
| [
{
"answer_id": 2093980,
"score": 3,
"text": "<p>$\\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\\sqrt{-1}$ is a complex square root of $-1$.</p>\n\n<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>\n\n<p>$$e^x = \\sum_{n=0}^{\\infty}\\frac{x^n}{n!}$$</p>\n\n<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>\n\n<p>$$e^{a+bi}=e^a\\cdot(\\cos(b)+i\\sin(b))$$</p>\n\n<p>Without going into full details this follows from the fact that both $\\cos(x)$ and $\\sin(x)$ have a \"nice\" representation as an infinite series as well.</p>\n\n<p>Now putting $a=0$ and $b=\\pi$ brings the famous Euler's formula.</p>\n"
},
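The series definition in the answer above is easy to check numerically. Here is a minimal Python sketch (an editorial addition, not part of the original answer): it sums the first terms of $\sum_n x^n/n!$ at $x=i\pi$ and compares with the library exponential.

```python
# Numerical check of the series definition at x = i*pi (illustrative only).
import cmath
from math import factorial

def exp_series(x, terms=60):
    """Partial sum of sum_{n>=0} x^n / n!."""
    return sum(x**n / factorial(n) for n in range(terms))

z = exp_series(1j * cmath.pi)
print(z)                          # ~ (-1+0j), up to rounding error
print(cmath.exp(1j * cmath.pi))   # library value, also ~ -1
print(abs(z - (-1)) < 1e-12)      # True: the partial sum agrees with e^{i*pi} = -1
```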
{
"answer_id": 2094009,
"score": 2,
"text": "<p>If you'd like a more intuitive explanation, here is an image from <a href=\"https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg\" rel=\"nofollow noreferrer\">Wikipedia</a>:</p>\n\n<p><a href=\"https://i.stack.imgur.com/9xL06.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9xL06.png\" alt=\"enter image description here\"></a></p>\n\n<p>To answer your question about $i$ represents, $i=\\sqrt{-1}$. It is an <a href=\"https://en.wikipedia.org/wiki/Imaginary_number\" rel=\"nofollow noreferrer\">imaginary number</a>.</p>\n\n<p>As you can see from the diagram, the angle $\\varphi$ represents the angle going from the real axis on an <a href=\"http://mathworld.wolfram.com/ArgandDiagram.html\" rel=\"nofollow noreferrer\">Argand Diagram</a> going counter-clockwise.</p>\n\n<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>\n\n<p>$1$ radian is equivalent to $\\frac{180}{\\pi}$ degrees.</p>\n\n<p>Therefore, a full $180^{\\circ}$ degree turn is $\\varphi=\\pi$ radians.</p>\n\n<p>The equation on the image, $e^{i\\varphi}=\\cos{\\varphi}+i\\sin{\\varphi}$ (<a href=\"https://en.wikipedia.org/wiki/Euler's_formula\" rel=\"nofollow noreferrer\">Euler's formula</a>) uses radians for angles of $\\varphi$.</p>\n\n<p>If you rotate the angle $180$ degrees, or $\\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\\pi}=-1$. You can also just simply substitute $\\varphi=\\pi$ into Euler's formula, then you'd get the same identity.</p>\n\n<p>If you have any more questions, feel free to ask.</p>\n"
},
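The rotation picture in that answer can be played with directly. A small Python sketch (again an editorial addition, not from the original answer): multiplying a complex number by $e^{i\varphi}$ rotates it counter-clockwise by $\varphi$ radians, so a half turn sends $1$ to $-1$.

```python
# Rotation reading of Euler's formula: multiplying by e^{i*phi} rotates
# a complex number counter-clockwise by phi radians (sketch only).
import cmath, math

deg180 = math.radians(180)            # 180 degrees = pi radians
point = 1 + 0j                        # start on the positive real axis
rotated = point * cmath.exp(1j * deg180)
print(rotated)                        # ~ (-1+0j): a half turn lands at -1
print(math.degrees(math.pi))          # 180.0, the radian/degree conversion
```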
{
"answer_id": 2094110,
"score": 1,
"text": "<p>The formula is given by</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<p>It has many profound real applications. For starters, $i=\\sqrt{-1}$ and $\\pi$ represents half of a circle here. In physics, we have the following application, since $\\cos$ and $\\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\\pi i}=-1$.</p>\n\n<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>\n\n<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>\n\n<p>$$e^x=1+x+\\frac12x^2+\\dots+\\frac1{n!}x^n+\\dots$$</p>\n\n<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>\n\n<p><a href=\"https://i.stack.imgur.com/WLu1U.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WLu1U.png\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HeNyM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HeNyM.png\" alt=\"enter image description here\"></a></p>\n\n<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>\n\n<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>\n\n<p>A good physical example of why this is so important:</p>\n\n<p><a href=\"http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg\" rel=\"nofollow noreferrer\">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>\n\n<p><a href=\"https://i.stack.imgur.com/XSPSM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XSPSM.png\" alt=\"enter image description here\"></a></p>\n"
},
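The "exponential in one direction, wave in the other" behaviour described in that answer can be sampled numerically. A short Python sketch (editorial addition): the real part of $e^{(1+i)t}$ is $e^t\cos(t)$, a cosine wave with exponentially growing amplitude.

```python
# e^{(1+1j)t} is exponential growth times rotation; its real part equals
# e^t * cos(t), a wave whose amplitude grows exponentially (sketch only).
import cmath, math

for t in range(6):
    z = cmath.exp((1 + 1j) * t)
    # the two columns agree: Re(e^{(1+i)t}) == e^t * cos(t)
    print(t, round(z.real, 3), round(math.exp(t) * math.cos(t), 3))
```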
{
"answer_id": 2094081,
"score": 0,
"text": "<p>I don't like my old answer. New answer:</p>\n\n<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>\n\n<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>\n\n<p>$e^{x+iy} = e^x*e^{iy}$.</p>\n\n<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>\n\n<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>\n\n<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>\n\n<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>\n\n<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>\n\n<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \\sin(y)$ and $c(y) = \\cos(y)$.</p>\n\n<p>So.... $e^{x+iy} = e^x(\\cos y + i \\sin y)$.</p>\n\n<p>So .... $e^{\\pi i} = e^0(\\cos \\pi + i \\sin \\pi) = 1(-1 + 0) = -1$.</p>\n\n<p>==== old answer ====</p>\n\n<p>It gets abstract.</p>\n\n<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>\n\n<p>$b^{k+j} = b^kb^j$</p>\n\n<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \\frac 1{b^{|k|}}$ and to the rational numbers via $b^{\\frac mn} = (\\sqrt[n]{b})^m$ even though \"multiplying $b$ by iteslf $-\\frac 53$ times\" doesn't actually make sense.</p>\n\n<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\\frac {de^x}{dx} = e^x$ [because $\\lim \\frac {b^{x + h} - b^x}{h} = \\lim \\frac {b^x(b^h - 1)}{h} = b^x\\lim \\frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\\lim \\frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>\n\n<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an \"imaginary\" number so that $i^2 = - 1$.</p>\n\n<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\\mathbb C \\rightarrow \\mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \\rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \\lim_{h\\rightarrow 0} \\frac {f((x+h) + iy) - f(x+iy)}{h} = \\lim_{j\\rightarrow 0}\\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the \"left\" or negative direction, or whether $h$ approaches from the \"right\" or positive direction. For complex numbers we also have to include the limit coming from the real \"left/right\" or the complex \"up/down\" direction.</p>\n\n<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. 
Or $\\frac {df}{dx} = -i\\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\\frac{du}{dx}=\\frac{dv}{dy}$ and $\\frac {du}{dy} = -\\frac {dv}{dx}$ and $f'(z) = \\frac{du}{dx}+\\frac{dv}{dx}i = \\frac {dv}{dy} - \\frac {du}{dy}i$.</p>\n\n<p>Okay, that was a bit complicated. But the point is...</p>\n\n<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\\frac{du}{dx} = u$ and $\\frac{dv}{dx} = v$ but $\\frac{dv}{dy} = u$ and $\\frac{du}{dy} =-v$.</p>\n\n<p>What possible $u,v$ will satisfy that???</p>\n\n<p>Well, there's only one. $u = e^x*\\cos x$ and $v = e^x*\\sin x$. I'll spare you the gory details.</p>\n\n<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\\cos x + i\\sin x)$.</p>\n\n<p>So... $e^{\\pi i} = \\cos \\pi + i \\sin \\pi = -1$!</p>\n\n<p>....</p>\n\n<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>\n"
},
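<p>(As a quick check of the final derivation above: let $u=e^x\cos y$ and $v=e^x\sin y$. Then $$\frac{\partial u}{\partial x}=u,\qquad \frac{\partial v}{\partial x}=v,\qquad \frac{\partial v}{\partial y}=e^x\cos y=u,\qquad \frac{\partial u}{\partial y}=-e^x\sin y=-v,$$ which is exactly the system that $(e^{x+iy})'=e^{x+iy}$ was shown to require.)</p>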
{
"answer_id": 2093980,
"score": 3,
"text": "<p>$\\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\\sqrt{-1}$ is a complex square root of $-1$.</p>\n\n<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>\n\n<p>$$e^x = \\sum_{n=0}^{\\infty}\\frac{x^n}{n!}$$</p>\n\n<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>\n\n<p>$$e^{a+bi}=e^a\\cdot(\\cos(b)+i\\sin(b))$$</p>\n\n<p>Without going into full details this follows from the fact that both $\\cos(x)$ and $\\sin(x)$ have a \"nice\" representation as an infinite series as well.</p>\n\n<p>Now putting $a=0$ and $b=\\pi$ brings the famous Euler's formula.</p>\n"
},
{
"answer_id": 2094009,
"score": 2,
"text": "<p>If you'd like a more intuitive explanation, here is an image from <a href=\"https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg\" rel=\"nofollow noreferrer\">Wikipedia</a>:</p>\n\n<p><a href=\"https://i.stack.imgur.com/9xL06.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9xL06.png\" alt=\"enter image description here\"></a></p>\n\n<p>To answer your question about $i$ represents, $i=\\sqrt{-1}$. It is an <a href=\"https://en.wikipedia.org/wiki/Imaginary_number\" rel=\"nofollow noreferrer\">imaginary number</a>.</p>\n\n<p>As you can see from the diagram, the angle $\\varphi$ represents the angle going from the real axis on an <a href=\"http://mathworld.wolfram.com/ArgandDiagram.html\" rel=\"nofollow noreferrer\">Argand Diagram</a> going counter-clockwise.</p>\n\n<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>\n\n<p>$1$ radian is equivalent to $\\frac{180}{\\pi}$ degrees.</p>\n\n<p>Therefore, a full $180^{\\circ}$ degree turn is $\\varphi=\\pi$ radians.</p>\n\n<p>The equation on the image, $e^{i\\varphi}=\\cos{\\varphi}+i\\sin{\\varphi}$ (<a href=\"https://en.wikipedia.org/wiki/Euler's_formula\" rel=\"nofollow noreferrer\">Euler's formula</a>) uses radians for angles of $\\varphi$.</p>\n\n<p>If you rotate the angle $180$ degrees, or $\\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\\pi}=-1$. You can also just simply substitute $\\varphi=\\pi$ into Euler's formula, then you'd get the same identity.</p>\n\n<p>If you have any more questions, feel free to ask.</p>\n"
},
{
"answer_id": 2094110,
"score": 1,
"text": "<p>The formula is given by</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<p>It has many profound real applications. For starters, $i=\\sqrt{-1}$ and $\\pi$ represents half of a circle here. In physics, we have the following application, since $\\cos$ and $\\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\\pi i}=-1$.</p>\n\n<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>\n\n<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>\n\n<p>$$e^x=1+x+\\frac12x^2+\\dots+\\frac1{n!}x^n+\\dots$$</p>\n\n<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>\n\n<p><a href=\"https://i.stack.imgur.com/WLu1U.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WLu1U.png\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HeNyM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HeNyM.png\" alt=\"enter image description here\"></a></p>\n\n<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>\n\n<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>\n\n<p>A good physical example of why this is so important:</p>\n\n<p><a href=\"http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg\" rel=\"nofollow noreferrer\">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>\n\n<p><a href=\"https://i.stack.imgur.com/XSPSM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XSPSM.png\" alt=\"enter image description here\"></a></p>\n"
},
{
"answer_id": 2094081,
"score": 0,
"text": "<p>I don't like my old answer. New answer:</p>\n\n<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>\n\n<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>\n\n<p>$e^{x+iy} = e^x*e^{iy}$.</p>\n\n<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>\n\n<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>\n\n<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>\n\n<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>\n\n<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>\n\n<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \\sin(y)$ and $c(y) = \\cos(y)$.</p>\n\n<p>So.... $e^{x+iy} = e^x(\\cos y + i \\sin y)$.</p>\n\n<p>So .... $e^{\\pi i} = e^0(\\cos \\pi + i \\sin \\pi) = 1(-1 + 0) = -1$.</p>\n\n<p>==== old answer ====</p>\n\n<p>It gets abstract.</p>\n\n<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>\n\n<p>$b^{k+j} = b^kb^j$</p>\n\n<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \\frac 1{b^{|k|}}$ and to the rational numbers via $b^{\\frac mn} = (\\sqrt[n]{b})^m$ even though \"multiplying $b$ by iteslf $-\\frac 53$ times\" doesn't actually make sense.</p>\n\n<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\\frac {de^x}{dx} = e^x$ [because $\\lim \\frac {b^{x + h} - b^x}{h} = \\lim \\frac {b^x(b^h - 1)}{h} = b^x\\lim \\frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\\lim \\frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>\n\n<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an \"imaginary\" number so that $i^2 = - 1$.</p>\n\n<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\\mathbb C \\rightarrow \\mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \\rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \\lim_{h\\rightarrow 0} \\frac {f((x+h) + iy) - f(x+iy)}{h} = \\lim_{j\\rightarrow 0}\\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the \"left\" or negative direction, or whether $h$ approaches from the \"right\" or positive direction. For complex numbers we also have to include the limit coming from the real \"left/right\" or the complex \"up/down\" direction.</p>\n\n<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. 
Or $\\frac {df}{dx} = -i\\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\\frac{du}{dx}=\\frac{dv}{dy}$ and $\\frac {du}{dy} = -\\frac {dv}{dx}$ and $f'(z) = \\frac{du}{dx}+\\frac{dv}{dx}i = \\frac {dv}{dy} - \\frac {du}{dy}i$.</p>\n\n<p>Okay, that was a bit complicated. But the point is...</p>\n\n<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\\frac{du}{dx} = u$ and $\\frac{dv}{dx} = v$ but $\\frac{dv}{dy} = u$ and $\\frac{du}{dy} =-v$.</p>\n\n<p>What possible $u,v$ will satisfy that???</p>\n\n<p>Well, there's only one. $u = e^x*\\cos x$ and $v = e^x*\\sin x$. I'll spare you the gory details.</p>\n\n<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\\cos x + i\\sin x)$.</p>\n\n<p>So... $e^{\\pi i} = \\cos \\pi + i \\sin \\pi = -1$!</p>\n\n<p>....</p>\n\n<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>\n"
},
{
"answer_id": 2093980,
"score": 3,
"text": "<p>$\\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\\sqrt{-1}$ is a complex square root of $-1$.</p>\n\n<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>\n\n<p>$$e^x = \\sum_{n=0}^{\\infty}\\frac{x^n}{n!}$$</p>\n\n<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>\n\n<p>$$e^{a+bi}=e^a\\cdot(\\cos(b)+i\\sin(b))$$</p>\n\n<p>Without going into full details this follows from the fact that both $\\cos(x)$ and $\\sin(x)$ have a \"nice\" representation as an infinite series as well.</p>\n\n<p>Now putting $a=0$ and $b=\\pi$ brings the famous Euler's formula.</p>\n"
},
{
"answer_id": 2094009,
"score": 2,
"text": "<p>If you'd like a more intuitive explanation, here is an image from <a href=\"https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg\" rel=\"nofollow noreferrer\">Wikipedia</a>:</p>\n\n<p><a href=\"https://i.stack.imgur.com/9xL06.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9xL06.png\" alt=\"enter image description here\"></a></p>\n\n<p>To answer your question about $i$ represents, $i=\\sqrt{-1}$. It is an <a href=\"https://en.wikipedia.org/wiki/Imaginary_number\" rel=\"nofollow noreferrer\">imaginary number</a>.</p>\n\n<p>As you can see from the diagram, the angle $\\varphi$ represents the angle going from the real axis on an <a href=\"http://mathworld.wolfram.com/ArgandDiagram.html\" rel=\"nofollow noreferrer\">Argand Diagram</a> going counter-clockwise.</p>\n\n<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>\n\n<p>$1$ radian is equivalent to $\\frac{180}{\\pi}$ degrees.</p>\n\n<p>Therefore, a full $180^{\\circ}$ degree turn is $\\varphi=\\pi$ radians.</p>\n\n<p>The equation on the image, $e^{i\\varphi}=\\cos{\\varphi}+i\\sin{\\varphi}$ (<a href=\"https://en.wikipedia.org/wiki/Euler's_formula\" rel=\"nofollow noreferrer\">Euler's formula</a>) uses radians for angles of $\\varphi$.</p>\n\n<p>If you rotate the angle $180$ degrees, or $\\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\\pi}=-1$. You can also just simply substitute $\\varphi=\\pi$ into Euler's formula, then you'd get the same identity.</p>\n\n<p>If you have any more questions, feel free to ask.</p>\n"
},
{
"answer_id": 2094110,
"score": 1,
"text": "<p>The formula is given by</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<p>It has many profound real applications. For starters, $i=\\sqrt{-1}$ and $\\pi$ represents half of a circle here. In physics, we have the following application, since $\\cos$ and $\\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\\pi i}=-1$.</p>\n\n<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>\n\n<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>\n\n<p>$$e^x=1+x+\\frac12x^2+\\dots+\\frac1{n!}x^n+\\dots$$</p>\n\n<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>\n\n<p><a href=\"https://i.stack.imgur.com/WLu1U.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WLu1U.png\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HeNyM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HeNyM.png\" alt=\"enter image description here\"></a></p>\n\n<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>\n\n<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>\n\n<p>A good physical example of why this is so important:</p>\n\n<p><a href=\"http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg\" rel=\"nofollow noreferrer\">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>\n\n<p><a href=\"https://i.stack.imgur.com/XSPSM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XSPSM.png\" alt=\"enter image description here\"></a></p>\n"
},
{
"answer_id": 2094081,
"score": 0,
"text": "<p>I don't like my old answer. New answer:</p>\n\n<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>\n\n<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>\n\n<p>$e^{x+iy} = e^x*e^{iy}$.</p>\n\n<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>\n\n<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>\n\n<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>\n\n<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>\n\n<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>\n\n<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \\sin(y)$ and $c(y) = \\cos(y)$.</p>\n\n<p>So.... $e^{x+iy} = e^x(\\cos y + i \\sin y)$.</p>\n\n<p>So .... $e^{\\pi i} = e^0(\\cos \\pi + i \\sin \\pi) = 1(-1 + 0) = -1$.</p>\n\n<p>==== old answer ====</p>\n\n<p>It gets abstract.</p>\n\n<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>\n\n<p>$b^{k+j} = b^kb^j$</p>\n\n<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \\frac 1{b^{|k|}}$ and to the rational numbers via $b^{\\frac mn} = (\\sqrt[n]{b})^m$ even though \"multiplying $b$ by iteslf $-\\frac 53$ times\" doesn't actually make sense.</p>\n\n<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\\frac {de^x}{dx} = e^x$ [because $\\lim \\frac {b^{x + h} - b^x}{h} = \\lim \\frac {b^x(b^h - 1)}{h} = b^x\\lim \\frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\\lim \\frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>\n\n<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an \"imaginary\" number so that $i^2 = - 1$.</p>\n\n<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\\mathbb C \\rightarrow \\mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \\rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \\lim_{h\\rightarrow 0} \\frac {f((x+h) + iy) - f(x+iy)}{h} = \\lim_{j\\rightarrow 0}\\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the \"left\" or negative direction, or whether $h$ approaches from the \"right\" or positive direction. For complex numbers we also have to include the limit coming from the real \"left/right\" or the complex \"up/down\" direction.</p>\n\n<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. 
Or $\\frac {df}{dx} = -i\\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\\frac{du}{dx}=\\frac{dv}{dy}$ and $\\frac {du}{dy} = -\\frac {dv}{dx}$ and $f'(z) = \\frac{du}{dx}+\\frac{dv}{dx}i = \\frac {dv}{dy} - \\frac {du}{dy}i$.</p>\n\n<p>Okay, that was a bit complicated. But the point is...</p>\n\n<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\\frac{du}{dx} = u$ and $\\frac{dv}{dx} = v$ but $\\frac{dv}{dy} = u$ and $\\frac{du}{dy} =-v$.</p>\n\n<p>What possible $u,v$ will satisfy that???</p>\n\n<p>Well, there's only one. $u = e^x*\\cos x$ and $v = e^x*\\sin x$. I'll spare you the gory details.</p>\n\n<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\\cos x + i\\sin x)$.</p>\n\n<p>So... $e^{\\pi i} = \\cos \\pi + i \\sin \\pi = -1$!</p>\n\n<p>....</p>\n\n<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>\n"
},
{
"answer_id": 2093980,
"score": 3,
"text": "<p>$\\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\\sqrt{-1}$ is a complex square root of $-1$.</p>\n\n<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>\n\n<p>$$e^x = \\sum_{n=0}^{\\infty}\\frac{x^n}{n!}$$</p>\n\n<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>\n\n<p>$$e^{a+bi}=e^a\\cdot(\\cos(b)+i\\sin(b))$$</p>\n\n<p>Without going into full details this follows from the fact that both $\\cos(x)$ and $\\sin(x)$ have a \"nice\" representation as an infinite series as well.</p>\n\n<p>Now putting $a=0$ and $b=\\pi$ brings the famous Euler's formula.</p>\n"
},
{
"answer_id": 2094009,
"score": 2,
"text": "<p>If you'd like a more intuitive explanation, here is an image from <a href=\"https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg\" rel=\"nofollow noreferrer\">Wikipedia</a>:</p>\n\n<p><a href=\"https://i.stack.imgur.com/9xL06.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9xL06.png\" alt=\"enter image description here\"></a></p>\n\n<p>To answer your question about $i$ represents, $i=\\sqrt{-1}$. It is an <a href=\"https://en.wikipedia.org/wiki/Imaginary_number\" rel=\"nofollow noreferrer\">imaginary number</a>.</p>\n\n<p>As you can see from the diagram, the angle $\\varphi$ represents the angle going from the real axis on an <a href=\"http://mathworld.wolfram.com/ArgandDiagram.html\" rel=\"nofollow noreferrer\">Argand Diagram</a> going counter-clockwise.</p>\n\n<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>\n\n<p>$1$ radian is equivalent to $\\frac{180}{\\pi}$ degrees.</p>\n\n<p>Therefore, a full $180^{\\circ}$ degree turn is $\\varphi=\\pi$ radians.</p>\n\n<p>The equation on the image, $e^{i\\varphi}=\\cos{\\varphi}+i\\sin{\\varphi}$ (<a href=\"https://en.wikipedia.org/wiki/Euler's_formula\" rel=\"nofollow noreferrer\">Euler's formula</a>) uses radians for angles of $\\varphi$.</p>\n\n<p>If you rotate the angle $180$ degrees, or $\\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\\pi}=-1$. You can also just simply substitute $\\varphi=\\pi$ into Euler's formula, then you'd get the same identity.</p>\n\n<p>If you have any more questions, feel free to ask.</p>\n"
},
{
"answer_id": 2094110,
"score": 1,
"text": "<p>The formula is given by</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<p>It has many profound real applications. For starters, $i=\\sqrt{-1}$ and $\\pi$ represents half of a circle here. In physics, we have the following application, since $\\cos$ and $\\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\\pi i}=-1$.</p>\n\n<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>\n\n<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>\n\n<p>$$e^x=1+x+\\frac12x^2+\\dots+\\frac1{n!}x^n+\\dots$$</p>\n\n<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>\n\n<p><a href=\"https://i.stack.imgur.com/WLu1U.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WLu1U.png\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HeNyM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HeNyM.png\" alt=\"enter image description here\"></a></p>\n\n<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>\n\n<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>\n\n<p>A good physical example of why this is so important:</p>\n\n<p><a href=\"http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg\" rel=\"nofollow noreferrer\">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>\n\n<p><a href=\"https://i.stack.imgur.com/XSPSM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XSPSM.png\" alt=\"enter image description here\"></a></p>\n"
},
{
"answer_id": 2094081,
"score": 0,
"text": "<p>I don't like my old answer. New answer:</p>\n\n<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>\n\n<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>\n\n<p>$e^{x+iy} = e^x*e^{iy}$.</p>\n\n<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>\n\n<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>\n\n<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>\n\n<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>\n\n<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>\n\n<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \\sin(y)$ and $c(y) = \\cos(y)$.</p>\n\n<p>So.... $e^{x+iy} = e^x(\\cos y + i \\sin y)$.</p>\n\n<p>So .... $e^{\\pi i} = e^0(\\cos \\pi + i \\sin \\pi) = 1(-1 + 0) = -1$.</p>\n\n<p>==== old answer ====</p>\n\n<p>It gets abstract.</p>\n\n<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>\n\n<p>$b^{k+j} = b^kb^j$</p>\n\n<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \\frac 1{b^{|k|}}$ and to the rational numbers via $b^{\\frac mn} = (\\sqrt[n]{b})^m$ even though \"multiplying $b$ by iteslf $-\\frac 53$ times\" doesn't actually make sense.</p>\n\n<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\\frac {de^x}{dx} = e^x$ [because $\\lim \\frac {b^{x + h} - b^x}{h} = \\lim \\frac {b^x(b^h - 1)}{h} = b^x\\lim \\frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\\lim \\frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>\n\n<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an \"imaginary\" number so that $i^2 = - 1$.</p>\n\n<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\\mathbb C \\rightarrow \\mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \\rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \\lim_{h\\rightarrow 0} \\frac {f((x+h) + iy) - f(x+iy)}{h} = \\lim_{j\\rightarrow 0}\\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the \"left\" or negative direction, or whether $h$ approaches from the \"right\" or positive direction. For complex numbers we also have to include the limit coming from the real \"left/right\" or the complex \"up/down\" direction.</p>\n\n<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. 
Or $\\frac {df}{dx} = -i\\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\\frac{du}{dx}=\\frac{dv}{dy}$ and $\\frac {du}{dy} = -\\frac {dv}{dx}$ and $f'(z) = \\frac{du}{dx}+\\frac{dv}{dx}i = \\frac {dv}{dy} - \\frac {du}{dy}i$.</p>\n\n<p>Okay, that was a bit complicated. But the point is...</p>\n\n<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\\frac{du}{dx} = u$ and $\\frac{dv}{dx} = v$ but $\\frac{dv}{dy} = u$ and $\\frac{du}{dy} =-v$.</p>\n\n<p>What possible $u,v$ will satisfy that???</p>\n\n<p>Well, there's only one. $u = e^x*\\cos x$ and $v = e^x*\\sin x$. I'll spare you the gory details.</p>\n\n<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\\cos x + i\\sin x)$.</p>\n\n<p>So... $e^{\\pi i} = \\cos \\pi + i \\sin \\pi = -1$!</p>\n\n<p>....</p>\n\n<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>\n"
},
{
"answer_id": 2093980,
"score": 3,
"text": "<p>$\\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\\sqrt{-1}$ is a complex square root of $-1$.</p>\n\n<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>\n\n<p>$$e^x = \\sum_{n=0}^{\\infty}\\frac{x^n}{n!}$$</p>\n\n<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>\n\n<p>$$e^{a+bi}=e^a\\cdot(\\cos(b)+i\\sin(b))$$</p>\n\n<p>Without going into full details this follows from the fact that both $\\cos(x)$ and $\\sin(x)$ have a \"nice\" representation as an infinite series as well.</p>\n\n<p>Now putting $a=0$ and $b=\\pi$ brings the famous Euler's formula.</p>\n"
},
{
"answer_id": 2094009,
"score": 2,
"text": "<p>If you'd like a more intuitive explanation, here is an image from <a href=\"https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg\" rel=\"nofollow noreferrer\">Wikipedia</a>:</p>\n\n<p><a href=\"https://i.stack.imgur.com/9xL06.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9xL06.png\" alt=\"enter image description here\"></a></p>\n\n<p>To answer your question about $i$ represents, $i=\\sqrt{-1}$. It is an <a href=\"https://en.wikipedia.org/wiki/Imaginary_number\" rel=\"nofollow noreferrer\">imaginary number</a>.</p>\n\n<p>As you can see from the diagram, the angle $\\varphi$ represents the angle going from the real axis on an <a href=\"http://mathworld.wolfram.com/ArgandDiagram.html\" rel=\"nofollow noreferrer\">Argand Diagram</a> going counter-clockwise.</p>\n\n<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>\n\n<p>$1$ radian is equivalent to $\\frac{180}{\\pi}$ degrees.</p>\n\n<p>Therefore, a full $180^{\\circ}$ degree turn is $\\varphi=\\pi$ radians.</p>\n\n<p>The equation on the image, $e^{i\\varphi}=\\cos{\\varphi}+i\\sin{\\varphi}$ (<a href=\"https://en.wikipedia.org/wiki/Euler's_formula\" rel=\"nofollow noreferrer\">Euler's formula</a>) uses radians for angles of $\\varphi$.</p>\n\n<p>If you rotate the angle $180$ degrees, or $\\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\\pi}=-1$. You can also just simply substitute $\\varphi=\\pi$ into Euler's formula, then you'd get the same identity.</p>\n\n<p>If you have any more questions, feel free to ask.</p>\n"
},
{
"answer_id": 2094110,
"score": 1,
"text": "<p>The formula is given by</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<p>It has many profound real applications. For starters, $i=\\sqrt{-1}$ and $\\pi$ represents half of a circle here. In physics, we have the following application, since $\\cos$ and $\\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\\pi i}=-1$.</p>\n\n<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>\n\n<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>\n\n<p>$$e^x=1+x+\\frac12x^2+\\dots+\\frac1{n!}x^n+\\dots$$</p>\n\n<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>\n\n<p><a href=\"https://i.stack.imgur.com/WLu1U.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WLu1U.png\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HeNyM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HeNyM.png\" alt=\"enter image description here\"></a></p>\n\n<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>\n\n<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>\n\n<p>A good physical example of why this is so important:</p>\n\n<p><a href=\"http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg\" rel=\"nofollow noreferrer\">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>\n\n<p><a href=\"https://i.stack.imgur.com/XSPSM.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XSPSM.png\" alt=\"enter image description here\"></a></p>\n"
}
] | Question: Explanation of Euler's identity
Answer 1: <p>$\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\sqrt{-1}$ is a complex square root of $-1$.</p>
<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>
<p>$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$</p>
<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>
<p>$$e^{a+bi}=e^a\cdot(\cos(b)+i\sin(b))$$</p>
<p>Without going into full details, this follows from the fact that both $\cos(x)$ and $\sin(x)$ have a "nice" representation as an infinite series as well.</p>
<p>Now putting $a=0$ and $b=\pi$ yields the famous Euler identity.</p>
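<p>If you want to see that numerically, here is a minimal Python sketch (an illustration using only the standard library; the thirty-term cutoff is an arbitrary choice) that sums the series at $x = i\pi$ and watches the partial sums settle on $-1$:</p>
<pre><code>import math

x = 1j * math.pi
term = 1 + 0j            # the n = 0 term, x^0 / 0!
partial = 0 + 0j
for n in range(30):
    partial += term
    term *= x / (n + 1)  # turn x^n/n! into x^(n+1)/(n+1)!

print(partial)               # roughly (-1+0j)
print(abs(partial - (-1)))   # the residual is tiny
</code></pre>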
Answer 2: <p>If you'd like a more intuitive explanation, here is an image from <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.stack.imgur.com/9xL06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xL06.png" alt="enter image description here"></a></p>
<p>To answer your question about what $i$ represents: $i=\sqrt{-1}$. It is an <a href="https://en.wikipedia.org/wiki/Imaginary_number" rel="nofollow noreferrer">imaginary number</a>.</p>
<p>As you can see from the diagram, the angle $\varphi$ is measured from the real axis of an <a href="http://mathworld.wolfram.com/ArgandDiagram.html" rel="nofollow noreferrer">Argand Diagram</a>, going counter-clockwise.</p>
<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\circ$), and radians ($^r$). (There are others, but they are not needed at the moment).</p>
<p>$1$ radian is equivalent to $\frac{180}{\pi}$ degrees.</p>
<p>Therefore, a full $180^{\circ}$ turn is $\varphi=\pi$ radians.</p>
<p>The equation on the image, $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ (<a href="https://en.wikipedia.org/wiki/Euler's_formula" rel="nofollow noreferrer">Euler's formula</a>) uses radians for angles of $\varphi$.</p>
<p>If you rotate the angle $180$ degrees, or $\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $-1+0i$. This is why $e^{i\pi}=-1$. You can also simply substitute $\varphi=\pi$ into Euler's formula; then you'd get the same identity.</p>
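<p>As a quick sanity check, here is a small Python sketch (assuming only the standard <code>math</code> and <code>cmath</code> modules) that walks $e^{i\varphi}$ around the unit circle and shows the $180$-degree point landing on $-1$:</p>
<pre><code>import cmath
import math

for degrees in (0, 90, 180, 270):
    phi = math.radians(degrees)   # convert degrees to radians
    z = cmath.exp(1j * phi)       # the corresponding point on the unit circle
    print(f"{degrees:3d} deg -> ({z.real:+.3f}, {z.imag:+.3f})")
# the 180-degree line prints (-1.000, +0.000), i.e. e^{i*pi} = -1
</code></pre>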
<p>If you have any more questions, feel free to ask.</p>
Answer 3: <p>The formula is given by</p>
<p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
<p>It has many profound real applications. For starters, $i=\sqrt{-1}$ and $\pi$ represents half of a circle here. In physics this has a direct application, since $\cos$ and $\sin$ are sinusoidal functions. That is, they look like waves, which occur in a lot of physics problems. In particular, a half turn around the circle lands on $-1$, so $e^{\pi i}=-1$.</p>
<p>Another neat feature of this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>
<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>
<p>$$e^x=1+x+\frac12x^2+\dots+\frac1{n!}x^n+\dots$$</p>
<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>
<p><a href="https://i.stack.imgur.com/WLu1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLu1U.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HeNyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeNyM.png" alt="enter image description here"></a></p>
<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>
<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>
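<p>Here is a small Python sketch of that behaviour (an illustration of mine, not taken from the plots above): the real part of $e^{(1+i)t}$ is $e^t\cos t$, a wave whose size is pinned to an exponential envelope:</p>
<pre><code>import cmath
import math

for t in range(7):
    z = cmath.exp((1 + 1j) * t)   # e^{(1+i)t} = e^t (cos t + i sin t)
    print(f"t={t}  real part {z.real:12.3f}  envelope e^t {math.exp(t):10.3f}")
# the real part flips sign like a wave but never exceeds the e^t envelope
</code></pre>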
<p>A good physical example of why this is so important:</p>
<p><a href="http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg" rel="nofollow noreferrer">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>
<p><a href="https://i.stack.imgur.com/XSPSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSPSM.png" alt="enter image description here"></a></p>
Answer 4: <p>I don't like my old answer. New answer:</p>
<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>
<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>$e^{x+iy} = e^x*e^{iy}$.</p>
<p>So what does $e^{iy}$ mean? Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>
<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>
<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>
<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>
<p>What pair of functions is such that $c'(y) = -s(y)$ and $s'(y)=c(y)$? Furthermore, we need $1=e^{0i} = c(0) + i*s(0)$, so $c(0) = 1$ and $s(0) = 0$. Well... the only such functions are $s(y) = \sin(y)$ and $c(y) = \cos(y)$.</p>
<p>So.... $e^{x+iy} = e^x(\cos y + i \sin y)$.</p>
<p>So .... $e^{\pi i} = e^0(\cos \pi + i \sin \pi) = 1\cdot(-1 + 0i) = -1$.</p>
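<p>As a numerical cross-check of that conclusion, here is a quick Python sketch (the sample points and use of <code>cmath</code> are choices of mine) comparing the formula against the built-in complex exponential:</p>
<pre><code>import cmath
import math
import random

for _ in range(5):
    x = random.uniform(-2.0, 2.0)
    y = random.uniform(-2.0, 2.0)
    lhs = cmath.exp(complex(x, y))                          # built-in e^{x+iy}
    rhs = math.exp(x) * complex(math.cos(y), math.sin(y))   # e^x (cos y + i sin y)
    assert cmath.isclose(lhs, rhs)
print("agrees; and e^(i*pi) =", cmath.exp(1j * math.pi))    # roughly -1
</code></pre>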
<p>==== old answer ====</p>
<p>It gets abstract.</p>
<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>
<p>$b^{k+j} = b^kb^j$</p>
<p>From that we can extend $k$ to zero and negative integers so that $b^0 = 1$ and $b^{-|k|} = \frac 1{b^{|k|}}$, and to the rational numbers via $b^{\frac mn} = (\sqrt[n]{b})^m$, even though "multiplying $b$ by itself $-\frac 53$ times" doesn't actually make sense.</p>
<p>When we learn calculus we discover that if we extend this to real numbers by taking limits, we get $\frac {de^x}{dx} = e^x$ [because $\lim \frac {b^{x + h} - b^x}{h} = \lim \frac {b^x(b^h - 1)}{h} = b^x\lim \frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\lim \frac{e^h - 1}{h}= 1$] and we use this to recreate the definition of continuous growth.</p>
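<p>That defining limit is easy to watch numerically; in this quick Python sketch, $(e^h-1)/h$ heads to $1$ while, for contrast, $(2^h-1)/h$ heads to $\ln 2$ instead:</p>
<pre><code>import math

# (b^h - 1)/h as h shrinks: it settles on 1 exactly when b = e
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    e_slope = (math.e ** h - 1) / h    # tends to 1
    two_slope = (2.0 ** h - 1) / h     # tends to ln 2, about 0.693
    print(h, e_slope, two_slope)
</code></pre>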
<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + yi$ where $x$ and $y$ are real numbers but $i$ is an "imaginary" number so that $i^2 = - 1$.</p>
<p>Slight diversion. The most fundamental aspect of Complex Analysis is doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\mathbb C \rightarrow \mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \rightarrow (u,v)$. If our function is to be differentiable in the complex sense we need $f'(z) = \lim_{h\rightarrow 0} \frac {f((x+h) + iy) - f(x+iy)}{h} = \lim_{j\rightarrow 0}\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simple, but lazy, terms: to be differentiable, the derivative must be the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the "left" or negative direction, or whether $h$ approaches from the "right" or positive direction. For complex numbers we also have to include the limit coming from the real "left/right" or the complex "up/down" direction.</p>
<p>But if we use the fact that $i^2 = -1$ we get, through a lot of complicated procedures, that differentiating with respect to $x$ is $-i$ times differentiating with respect to $y$. Or $\frac {df}{dx} = -i\frac{df}{dy}$, and furthermore that if $f(x+iy) = u + iv$ then $\frac{du}{dx}=\frac{dv}{dy}$ and $\frac {du}{dy} = -\frac {dv}{dx}$ and $f'(z) = \frac{du}{dx}+\frac{dv}{dx}i = \frac {dv}{dy} - \frac {du}{dy}i$.</p>
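<p>Here is a finite-difference sanity check of that relation for $f(z)=e^z$ (a sketch; the sample point and step size are arbitrary choices of mine):</p>
<pre><code>import cmath

def f(z):
    return cmath.exp(z)

z0 = complex(0.3, 0.7)
h = 1e-6
df_dx = (f(z0 + h) - f(z0 - h)) / (2 * h)            # step along the real axis
df_dy = (f(z0 + 1j * h) - f(z0 - 1j * h)) / (2 * h)  # step along the imaginary axis
print(df_dx)
print(-1j * df_dy)   # the same number: df/dx = -i df/dy
</code></pre>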
<p>Okay, that was a bit complicated. But the point is...</p>
<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\frac{du}{dx} = u$ and $\frac{dv}{dx} = v$ but $\frac{dv}{dy} = u$ and $\frac{du}{dy} =-v$.</p>
<p>What possible $u,v$ will satisfy that???</p>
<p>Well, there's only one pair: $u = e^x\cos y$ and $v = e^x\sin y$. I'll spare you the gory details.</p>
<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\cos y + i\sin y)$.</p>
<p>So... $e^{\pi i} = \cos \pi + i \sin \pi = -1$!</p>
<p>....</p>
<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance from $0$ and an angle to the x-axis. If you multiply two complex numbers, the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values and rotating. So it isn't surprising (well, actually it's jaw-droppingly astonishing) that exponentiation would involve interconnected trig functions.</p>
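<p>A tiny Python check of that multiply-the-lengths, add-the-angles picture (a sketch using the standard <code>cmath</code> module; the two sample numbers are arbitrary):</p>
<pre><code>import cmath
import math

z = complex(1, 2)
w = complex(-3, 1)
print(abs(z * w), abs(z) * abs(w))   # the lengths multiply
two_pi = 2 * math.pi
print(cmath.phase(z * w) % two_pi,
      (cmath.phase(z) + cmath.phase(w)) % two_pi)   # the angles add (mod 2*pi)
</code></pre>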
Answer 5: <p>$\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\sqrt{-1}$ is a complex square root of $-1$.</p>
<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>
<p>$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$</p>
<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>
<p>$$e^{a+bi}=e^a\cdot(\cos(b)+i\sin(b))$$</p>
<p>Without going into full details this follows from the fact that both $\cos(x)$ and $\sin(x)$ have a "nice" representation as an infinite series as well.</p>
<p>Now putting $a=0$ and $b=\pi$ brings the famous Euler's formula.</p>
Answer 6: <p>If you'd like a more intuitive explanation, here is an image from <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.stack.imgur.com/9xL06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xL06.png" alt="enter image description here"></a></p>
<p>To answer your question about $i$ represents, $i=\sqrt{-1}$. It is an <a href="https://en.wikipedia.org/wiki/Imaginary_number" rel="nofollow noreferrer">imaginary number</a>.</p>
<p>As you can see from the diagram, the angle $\varphi$ represents the angle going from the real axis on an <a href="http://mathworld.wolfram.com/ArgandDiagram.html" rel="nofollow noreferrer">Argand Diagram</a> going counter-clockwise.</p>
<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>
<p>$1$ radian is equivalent to $\frac{180}{\pi}$ degrees.</p>
<p>Therefore, a full $180^{\circ}$ degree turn is $\varphi=\pi$ radians.</p>
<p>The equation on the image, $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ (<a href="https://en.wikipedia.org/wiki/Euler's_formula" rel="nofollow noreferrer">Euler's formula</a>) uses radians for angles of $\varphi$.</p>
<p>If you rotate the angle $180$ degrees, or $\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\pi}=-1$. You can also just simply substitute $\varphi=\pi$ into Euler's formula, then you'd get the same identity.</p>
<p>If you have any more questions, feel free to ask.</p>
Answer 7: <p>The formula is given by</p>
<p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
<p>It has many profound real applications. For starters, $i=\sqrt{-1}$ and $\pi$ represents half of a circle here. In physics, we have the following application, since $\cos$ and $\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\pi i}=-1$.</p>
<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>
<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>
<p>$$e^x=1+x+\frac12x^2+\dots+\frac1{n!}x^n+\dots$$</p>
<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>
<p><a href="https://i.stack.imgur.com/WLu1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLu1U.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HeNyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeNyM.png" alt="enter image description here"></a></p>
<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>
<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>
<p>A good physical example of why this is so important:</p>
<p><a href="http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg" rel="nofollow noreferrer">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>
<p><a href="https://i.stack.imgur.com/XSPSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSPSM.png" alt="enter image description here"></a></p>
Answer 8: <p>I don't like my old answer. New answer:</p>
<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>
<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>$e^{x+iy} = e^x*e^{iy}$.</p>
<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>
<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>
<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>
<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>
<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \sin(y)$ and $c(y) = \cos(y)$.</p>
<p>So.... $e^{x+iy} = e^x(\cos y + i \sin y)$.</p>
<p>So .... $e^{\pi i} = e^0(\cos \pi + i \sin \pi) = 1(-1 + 0) = -1$.</p>
<p>==== old answer ====</p>
<p>It gets abstract.</p>
<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>
<p>$b^{k+j} = b^kb^j$</p>
<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \frac 1{b^{|k|}}$ and to the rational numbers via $b^{\frac mn} = (\sqrt[n]{b})^m$ even though "multiplying $b$ by iteslf $-\frac 53$ times" doesn't actually make sense.</p>
<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\frac {de^x}{dx} = e^x$ [because $\lim \frac {b^{x + h} - b^x}{h} = \lim \frac {b^x(b^h - 1)}{h} = b^x\lim \frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\lim \frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>
<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an "imaginary" number so that $i^2 = - 1$.</p>
<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\mathbb C \rightarrow \mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \lim_{h\rightarrow 0} \frac {f((x+h) + iy) - f(x+iy)}{h} = \lim_{j\rightarrow 0}\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the "left" or negative direction, or whether $h$ approaches from the "right" or positive direction. For complex numbers we also have to include the limit coming from the real "left/right" or the complex "up/down" direction.</p>
<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. Or $\frac {df}{dx} = -i\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\frac{du}{dx}=\frac{dv}{dy}$ and $\frac {du}{dy} = -\frac {dv}{dx}$ and $f'(z) = \frac{du}{dx}+\frac{dv}{dx}i = \frac {dv}{dy} - \frac {du}{dy}i$.</p>
<p>Okay, that was a bit complicated. But the point is...</p>
<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\frac{du}{dx} = u$ and $\frac{dv}{dx} = v$ but $\frac{dv}{dy} = u$ and $\frac{du}{dy} =-v$.</p>
<p>What possible $u,v$ will satisfy that???</p>
<p>Well, there's only one. $u = e^x*\cos x$ and $v = e^x*\sin x$. I'll spare you the gory details.</p>
<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\cos x + i\sin x)$.</p>
<p>So... $e^{\pi i} = \cos \pi + i \sin \pi = -1$!</p>
<p>....</p>
<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>
Answer 9: <p>$\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\sqrt{-1}$ is a complex square root of $-1$.</p>
<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>
<p>$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$</p>
<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>
<p>$$e^{a+bi}=e^a\cdot(\cos(b)+i\sin(b))$$</p>
<p>Without going into full details this follows from the fact that both $\cos(x)$ and $\sin(x)$ have a "nice" representation as an infinite series as well.</p>
<p>Now putting $a=0$ and $b=\pi$ gives the famous Euler identity.</p>
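<p>(A quick numerical check of my own: the partial sums of this series at $x = \pi i$ converge rapidly to $-1$.)</p>

<pre><code>import math

z = 1j * math.pi
term, total = 1 + 0j, 0j
for n in range(30):
    total += term
    term *= z / (n + 1)  # next term: z**(n+1) / (n+1)!
print(total)  # approximately (-1+0j)
</code></pre>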
Answer 10: <p>If you'd like a more intuitive explanation, here is an image from <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.stack.imgur.com/9xL06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xL06.png" alt="enter image description here"></a></p>
<p>To answer your question about what $i$ represents: $i=\sqrt{-1}$. It is an <a href="https://en.wikipedia.org/wiki/Imaginary_number" rel="nofollow noreferrer">imaginary number</a>.</p>
<p>As you can see from the diagram, $\varphi$ represents the angle measured counter-clockwise from the real axis on an <a href="http://mathworld.wolfram.com/ArgandDiagram.html" rel="nofollow noreferrer">Argand Diagram</a>.</p>
<p>Basically, in mathematics you have several systems for measuring angles. There are the conventional degrees ($^\circ$) and radians ($^r$). (There are others, but they are not needed at the moment.)</p>
<p>$1$ radian is equivalent to $\frac{180}{\pi}$ degrees.</p>
<p>Therefore, a full $180^{\circ}$ turn is $\varphi=\pi$ radians.</p>
<p>The equation on the image, $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ (<a href="https://en.wikipedia.org/wiki/Euler's_formula" rel="nofollow noreferrer">Euler's formula</a>) uses radians for angles of $\varphi$.</p>
<p>If you rotate the angle $180$ degrees, or $\pi$ radians, you will notice that the arrow points towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\pi}=-1$. You can also simply substitute $\varphi=\pi$ into Euler's formula and get the same identity.</p>
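<p>(You can watch the rotation numerically; this is my own sketch, not part of the original answer. Multiplying by $e^{i\varphi}$ rotates a point of the unit circle by $\varphi$, and after a total of $\pi$ radians the arrow points at $-1$.)</p>

<pre><code>import cmath, math

point = 1 + 0j  # start on the positive real axis
for step in range(4):
    point *= cmath.exp(1j * math.pi / 4)  # rotate by 45 degrees each step
    print(step + 1, point)
# After four steps (180 degrees in total) the point is approximately -1+0j.
</code></pre>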
<p>If you have any more questions, feel free to ask.</p>
Answer 11: <p>The formula is given by</p>
<p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
<p>It has many profound real applications. For starters, $i=\sqrt{-1}$, and $\pi$ represents half of a circle here. It shows up in physics because $\cos$ and $\sin$ are sinusoidal functions: they look like waves, which occur in a lot of physics problems. In particular, rotating halfway around the circle lands you at $-1$, so $e^{\pi i}=-1$.</p>
<p>Another neat feature is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$-type expressions. The most basic example that comes to my mind is electromagnetism, where the electric part is real and the magnetic part is imaginary, or vice versa.</p>
<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>
<p>$$e^x=1+x+\frac12x^2+\dots+\frac1{n!}x^n+\dots$$</p>
<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>
<p><a href="https://i.stack.imgur.com/WLu1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLu1U.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HeNyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeNyM.png" alt="enter image description here"></a></p>
<p>And so this is the exponential function cut in half: the first image is the real part and the second is the imaginary part.</p>
<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>
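<p>(The same split can be seen numerically; a tiny sketch of my own: along the real axis $e^z$ grows exponentially, while along the imaginary axis it only oscillates, always staying on the unit circle.)</p>

<pre><code>import cmath

for t in range(5):
    print(t, cmath.exp(t), cmath.exp(1j * t))
# exp(t) blows up exponentially; exp(1j*t) always has modulus 1.
</code></pre>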
<p>A good physical example of why this is so important:</p>
<p><a href="http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg" rel="nofollow noreferrer">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>
<p><a href="https://i.stack.imgur.com/XSPSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSPSM.png" alt="enter image description here"></a></p>
Answer 13: <p>$\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\sqrt{-1}$ is a complex square root of $-1$.</p>
<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>
<p>$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$</p>
<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>
<p>$$e^{a+bi}=e^a\cdot(\cos(b)+i\sin(b))$$</p>
<p>Without going into full details this follows from the fact that both $\cos(x)$ and $\sin(x)$ have a "nice" representation as an infinite series as well.</p>
<p>Now putting $a=0$ and $b=\pi$ brings the famous Euler's formula.</p>
Answer 14: <p>If you'd like a more intuitive explanation, here is an image from <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.stack.imgur.com/9xL06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xL06.png" alt="enter image description here"></a></p>
<p>To answer your question about $i$ represents, $i=\sqrt{-1}$. It is an <a href="https://en.wikipedia.org/wiki/Imaginary_number" rel="nofollow noreferrer">imaginary number</a>.</p>
<p>As you can see from the diagram, the angle $\varphi$ represents the angle going from the real axis on an <a href="http://mathworld.wolfram.com/ArgandDiagram.html" rel="nofollow noreferrer">Argand Diagram</a> going counter-clockwise.</p>
<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>
<p>$1$ radian is equivalent to $\frac{180}{\pi}$ degrees.</p>
<p>Therefore, a full $180^{\circ}$ degree turn is $\varphi=\pi$ radians.</p>
<p>The equation on the image, $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ (<a href="https://en.wikipedia.org/wiki/Euler's_formula" rel="nofollow noreferrer">Euler's formula</a>) uses radians for angles of $\varphi$.</p>
<p>If you rotate the angle $180$ degrees, or $\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\pi}=-1$. You can also just simply substitute $\varphi=\pi$ into Euler's formula, then you'd get the same identity.</p>
<p>If you have any more questions, feel free to ask.</p>
Answer 15: <p>The formula is given by</p>
<p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
<p>It has many profound real applications. For starters, $i=\sqrt{-1}$ and $\pi$ represents half of a circle here. In physics, we have the following application, since $\cos$ and $\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\pi i}=-1$.</p>
<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>
<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>
<p>$$e^x=1+x+\frac12x^2+\dots+\frac1{n!}x^n+\dots$$</p>
<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>
<p><a href="https://i.stack.imgur.com/WLu1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLu1U.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HeNyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeNyM.png" alt="enter image description here"></a></p>
<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>
<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>
<p>A good physical example of why this is so important:</p>
<p><a href="http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg" rel="nofollow noreferrer">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>
<p><a href="https://i.stack.imgur.com/XSPSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSPSM.png" alt="enter image description here"></a></p>
Answer 16: <p>I don't like my old answer. New answer:</p>
<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>
<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>$e^{x+iy} = e^x*e^{iy}$.</p>
<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>
<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>
<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>
<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>
<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \sin(y)$ and $c(y) = \cos(y)$.</p>
<p>So.... $e^{x+iy} = e^x(\cos y + i \sin y)$.</p>
<p>So .... $e^{\pi i} = e^0(\cos \pi + i \sin \pi) = 1(-1 + 0) = -1$.</p>
<p>==== old answer ====</p>
<p>It gets abstract.</p>
<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>
<p>$b^{k+j} = b^kb^j$</p>
<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \frac 1{b^{|k|}}$ and to the rational numbers via $b^{\frac mn} = (\sqrt[n]{b})^m$ even though "multiplying $b$ by iteslf $-\frac 53$ times" doesn't actually make sense.</p>
<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\frac {de^x}{dx} = e^x$ [because $\lim \frac {b^{x + h} - b^x}{h} = \lim \frac {b^x(b^h - 1)}{h} = b^x\lim \frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\lim \frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>
<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an "imaginary" number so that $i^2 = - 1$.</p>
<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\mathbb C \rightarrow \mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \lim_{h\rightarrow 0} \frac {f((x+h) + iy) - f(x+iy)}{h} = \lim_{j\rightarrow 0}\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the "left" or negative direction, or whether $h$ approaches from the "right" or positive direction. For complex numbers we also have to include the limit coming from the real "left/right" or the complex "up/down" direction.</p>
<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. Or $\frac {df}{dx} = -i\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\frac{du}{dx}=\frac{dv}{dy}$ and $\frac {du}{dy} = -\frac {dv}{dx}$ and $f'(z) = \frac{du}{dx}+\frac{dv}{dx}i = \frac {dv}{dy} - \frac {du}{dy}i$.</p>
<p>Okay, that was a bit complicated. But the point is...</p>
<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\frac{du}{dx} = u$ and $\frac{dv}{dx} = v$ but $\frac{dv}{dy} = u$ and $\frac{du}{dy} =-v$.</p>
<p>What possible $u,v$ will satisfy that???</p>
<p>Well, there's only one. $u = e^x*\cos x$ and $v = e^x*\sin x$. I'll spare you the gory details.</p>
<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\cos x + i\sin x)$.</p>
<p>So... $e^{\pi i} = \cos \pi + i \sin \pi = -1$!</p>
<p>....</p>
<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>
Answer 17: <p>$\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\sqrt{-1}$ is a complex square root of $-1$.</p>
<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>
<p>$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$</p>
<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>
<p>$$e^{a+bi}=e^a\cdot(\cos(b)+i\sin(b))$$</p>
<p>Without going into full details this follows from the fact that both $\cos(x)$ and $\sin(x)$ have a "nice" representation as an infinite series as well.</p>
<p>Now putting $a=0$ and $b=\pi$ brings the famous Euler's formula.</p>
Answer 18: <p>If you'd like a more intuitive explanation, here is an image from <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.stack.imgur.com/9xL06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xL06.png" alt="enter image description here"></a></p>
<p>To answer your question about $i$ represents, $i=\sqrt{-1}$. It is an <a href="https://en.wikipedia.org/wiki/Imaginary_number" rel="nofollow noreferrer">imaginary number</a>.</p>
<p>As you can see from the diagram, the angle $\varphi$ represents the angle going from the real axis on an <a href="http://mathworld.wolfram.com/ArgandDiagram.html" rel="nofollow noreferrer">Argand Diagram</a> going counter-clockwise.</p>
<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>
<p>$1$ radian is equivalent to $\frac{180}{\pi}$ degrees.</p>
<p>Therefore, a full $180^{\circ}$ degree turn is $\varphi=\pi$ radians.</p>
<p>The equation on the image, $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ (<a href="https://en.wikipedia.org/wiki/Euler's_formula" rel="nofollow noreferrer">Euler's formula</a>) uses radians for angles of $\varphi$.</p>
<p>If you rotate the angle $180$ degrees, or $\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\pi}=-1$. You can also just simply substitute $\varphi=\pi$ into Euler's formula, then you'd get the same identity.</p>
<p>If you have any more questions, feel free to ask.</p>
Answer 19: <p>The formula is given by</p>
<p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
<p>It has many profound real applications. For starters, $i=\sqrt{-1}$ and $\pi$ represents half of a circle here. In physics, we have the following application, since $\cos$ and $\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\pi i}=-1$.</p>
<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>
<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>
<p>$$e^x=1+x+\frac12x^2+\dots+\frac1{n!}x^n+\dots$$</p>
<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>
<p><a href="https://i.stack.imgur.com/WLu1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLu1U.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HeNyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeNyM.png" alt="enter image description here"></a></p>
<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>
<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>
<p>A good physical example of why this is so important:</p>
<p><a href="http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg" rel="nofollow noreferrer">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>
<p><a href="https://i.stack.imgur.com/XSPSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSPSM.png" alt="enter image description here"></a></p>
Answer 20: <p>I don't like my old answer. New answer:</p>
<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>
<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>$e^{x+iy} = e^x*e^{iy}$.</p>
<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>
<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>
<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>
<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>
<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \sin(y)$ and $c(y) = \cos(y)$.</p>
<p>So.... $e^{x+iy} = e^x(\cos y + i \sin y)$.</p>
<p>So .... $e^{\pi i} = e^0(\cos \pi + i \sin \pi) = 1(-1 + 0) = -1$.</p>
<p>==== old answer ====</p>
<p>It gets abstract.</p>
<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>
<p>$b^{k+j} = b^kb^j$</p>
<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \frac 1{b^{|k|}}$ and to the rational numbers via $b^{\frac mn} = (\sqrt[n]{b})^m$ even though "multiplying $b$ by iteslf $-\frac 53$ times" doesn't actually make sense.</p>
<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\frac {de^x}{dx} = e^x$ [because $\lim \frac {b^{x + h} - b^x}{h} = \lim \frac {b^x(b^h - 1)}{h} = b^x\lim \frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\lim \frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>
<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an "imaginary" number so that $i^2 = - 1$.</p>
<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\mathbb C \rightarrow \mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \lim_{h\rightarrow 0} \frac {f((x+h) + iy) - f(x+iy)}{h} = \lim_{j\rightarrow 0}\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the "left" or negative direction, or whether $h$ approaches from the "right" or positive direction. For complex numbers we also have to include the limit coming from the real "left/right" or the complex "up/down" direction.</p>
<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. Or $\frac {df}{dx} = -i\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\frac{du}{dx}=\frac{dv}{dy}$ and $\frac {du}{dy} = -\frac {dv}{dx}$ and $f'(z) = \frac{du}{dx}+\frac{dv}{dx}i = \frac {dv}{dy} - \frac {du}{dy}i$.</p>
<p>Okay, that was a bit complicated. But the point is...</p>
<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\frac{du}{dx} = u$ and $\frac{dv}{dx} = v$ but $\frac{dv}{dy} = u$ and $\frac{du}{dy} =-v$.</p>
<p>What possible $u,v$ will satisfy that???</p>
<p>Well, there's only one. $u = e^x*\cos x$ and $v = e^x*\sin x$. I'll spare you the gory details.</p>
<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\cos x + i\sin x)$.</p>
<p>So... $e^{\pi i} = \cos \pi + i \sin \pi = -1$!</p>
<p>....</p>
<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>
Answer 21: <p>$\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\sqrt{-1}$ is a complex square root of $-1$.</p>
<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>
<p>$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$</p>
<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>
<p>$$e^{a+bi}=e^a\cdot(\cos(b)+i\sin(b))$$</p>
<p>Without going into full details this follows from the fact that both $\cos(x)$ and $\sin(x)$ have a "nice" representation as an infinite series as well.</p>
<p>Now putting $a=0$ and $b=\pi$ brings the famous Euler's formula.</p>
Answer 22: <p>If you'd like a more intuitive explanation, here is an image from <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.stack.imgur.com/9xL06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xL06.png" alt="enter image description here"></a></p>
<p>To answer your question about $i$ represents, $i=\sqrt{-1}$. It is an <a href="https://en.wikipedia.org/wiki/Imaginary_number" rel="nofollow noreferrer">imaginary number</a>.</p>
<p>As you can see from the diagram, the angle $\varphi$ represents the angle going from the real axis on an <a href="http://mathworld.wolfram.com/ArgandDiagram.html" rel="nofollow noreferrer">Argand Diagram</a> going counter-clockwise.</p>
<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>
<p>$1$ radian is equivalent to $\frac{180}{\pi}$ degrees.</p>
<p>Therefore, a full $180^{\circ}$ degree turn is $\varphi=\pi$ radians.</p>
<p>The equation on the image, $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ (<a href="https://en.wikipedia.org/wiki/Euler's_formula" rel="nofollow noreferrer">Euler's formula</a>) uses radians for angles of $\varphi$.</p>
<p>If you rotate the angle $180$ degrees, or $\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\pi}=-1$. You can also just simply substitute $\varphi=\pi$ into Euler's formula, then you'd get the same identity.</p>
<p>If you have any more questions, feel free to ask.</p>
Answer 23: <p>The formula is given by</p>
<p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
<p>It has many profound real applications. For starters, $i=\sqrt{-1}$ and $\pi$ represents half of a circle here. In physics, we have the following application, since $\cos$ and $\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\pi i}=-1$.</p>
<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>
<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>
<p>$$e^x=1+x+\frac12x^2+\dots+\frac1{n!}x^n+\dots$$</p>
<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>
<p><a href="https://i.stack.imgur.com/WLu1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLu1U.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HeNyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeNyM.png" alt="enter image description here"></a></p>
<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>
<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>
<p>A good physical example of why this is so important:</p>
<p><a href="http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg" rel="nofollow noreferrer">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>
<p><a href="https://i.stack.imgur.com/XSPSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSPSM.png" alt="enter image description here"></a></p>
Answer 24: <p>I don't like my old answer. New answer:</p>
<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>
<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>$e^{x+iy} = e^x*e^{iy}$.</p>
<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>
<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>
<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>
<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>
<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \sin(y)$ and $c(y) = \cos(y)$.</p>
<p>So.... $e^{x+iy} = e^x(\cos y + i \sin y)$.</p>
<p>So .... $e^{\pi i} = e^0(\cos \pi + i \sin \pi) = 1(-1 + 0) = -1$.</p>
<p>==== old answer ====</p>
<p>It gets abstract.</p>
<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>
<p>$b^{k+j} = b^kb^j$</p>
<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \frac 1{b^{|k|}}$ and to the rational numbers via $b^{\frac mn} = (\sqrt[n]{b})^m$ even though "multiplying $b$ by iteslf $-\frac 53$ times" doesn't actually make sense.</p>
<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\frac {de^x}{dx} = e^x$ [because $\lim \frac {b^{x + h} - b^x}{h} = \lim \frac {b^x(b^h - 1)}{h} = b^x\lim \frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\lim \frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>
<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an "imaginary" number so that $i^2 = - 1$.</p>
<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\mathbb C \rightarrow \mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \lim_{h\rightarrow 0} \frac {f((x+h) + iy) - f(x+iy)}{h} = \lim_{j\rightarrow 0}\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the "left" or negative direction, or whether $h$ approaches from the "right" or positive direction. For complex numbers we also have to include the limit coming from the real "left/right" or the complex "up/down" direction.</p>
<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. Or $\frac {df}{dx} = -i\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\frac{du}{dx}=\frac{dv}{dy}$ and $\frac {du}{dy} = -\frac {dv}{dx}$ and $f'(z) = \frac{du}{dx}+\frac{dv}{dx}i = \frac {dv}{dy} - \frac {du}{dy}i$.</p>
<p>Okay, that was a bit complicated. But the point is...</p>
<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\frac{du}{dx} = u$ and $\frac{dv}{dx} = v$ but $\frac{dv}{dy} = u$ and $\frac{du}{dy} =-v$.</p>
<p>What possible $u,v$ will satisfy that???</p>
<p>Well, there's only one. $u = e^x*\cos x$ and $v = e^x*\sin x$. I'll spare you the gory details.</p>
<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\cos x + i\sin x)$.</p>
<p>So... $e^{\pi i} = \cos \pi + i \sin \pi = -1$!</p>
<p>....</p>
<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>
Answer 25: <p>$\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\sqrt{-1}$ is a complex square root of $-1$.</p>
<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>
<p>$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$</p>
<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>
<p>$$e^{a+bi}=e^a\cdot(\cos(b)+i\sin(b))$$</p>
<p>Without going into full details this follows from the fact that both $\cos(x)$ and $\sin(x)$ have a "nice" representation as an infinite series as well.</p>
<p>Now putting $a=0$ and $b=\pi$ brings the famous Euler's formula.</p>
Answer 26: <p>If you'd like a more intuitive explanation, here is an image from <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.stack.imgur.com/9xL06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xL06.png" alt="enter image description here"></a></p>
<p>To answer your question about $i$ represents, $i=\sqrt{-1}$. It is an <a href="https://en.wikipedia.org/wiki/Imaginary_number" rel="nofollow noreferrer">imaginary number</a>.</p>
<p>As you can see from the diagram, the angle $\varphi$ represents the angle going from the real axis on an <a href="http://mathworld.wolfram.com/ArgandDiagram.html" rel="nofollow noreferrer">Argand Diagram</a> going counter-clockwise.</p>
<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>
<p>$1$ radian is equivalent to $\frac{180}{\pi}$ degrees.</p>
<p>Therefore, a full $180^{\circ}$ degree turn is $\varphi=\pi$ radians.</p>
<p>The equation on the image, $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ (<a href="https://en.wikipedia.org/wiki/Euler's_formula" rel="nofollow noreferrer">Euler's formula</a>) uses radians for angles of $\varphi$.</p>
<p>If you rotate the angle $180$ degrees, or $\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\pi}=-1$. You can also just simply substitute $\varphi=\pi$ into Euler's formula, then you'd get the same identity.</p>
<p>If you have any more questions, feel free to ask.</p>
Answer 27: <p>The formula is given by</p>
<p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
<p>It has many profound real applications. For starters, $i=\sqrt{-1}$ and $\pi$ represents half of a circle here. In physics, we have the following application, since $\cos$ and $\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\pi i}=-1$.</p>
<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>
<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>
<p>$$e^x=1+x+\frac12x^2+\dots+\frac1{n!}x^n+\dots$$</p>
<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>
<p><a href="https://i.stack.imgur.com/WLu1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLu1U.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HeNyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeNyM.png" alt="enter image description here"></a></p>
<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>
<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>
<p>A good physical example of why this is so important:</p>
<p><a href="http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg" rel="nofollow noreferrer">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>
<p><a href="https://i.stack.imgur.com/XSPSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSPSM.png" alt="enter image description here"></a></p>
Answer 28: <p>I don't like my old answer. New answer:</p>
<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>
<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>$e^{x+iy} = e^x*e^{iy}$.</p>
<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>
<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>
<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>
<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>
<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \sin(y)$ and $c(y) = \cos(y)$.</p>
<p>So.... $e^{x+iy} = e^x(\cos y + i \sin y)$.</p>
<p>So .... $e^{\pi i} = e^0(\cos \pi + i \sin \pi) = 1(-1 + 0) = -1$.</p>
<p>==== old answer ====</p>
<p>It gets abstract.</p>
<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>
<p>$b^{k+j} = b^kb^j$</p>
<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \frac 1{b^{|k|}}$ and to the rational numbers via $b^{\frac mn} = (\sqrt[n]{b})^m$ even though "multiplying $b$ by iteslf $-\frac 53$ times" doesn't actually make sense.</p>
<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\frac {de^x}{dx} = e^x$ [because $\lim \frac {b^{x + h} - b^x}{h} = \lim \frac {b^x(b^h - 1)}{h} = b^x\lim \frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\lim \frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>
<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an "imaginary" number so that $i^2 = - 1$.</p>
<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\mathbb C \rightarrow \mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \lim_{h\rightarrow 0} \frac {f((x+h) + iy) - f(x+iy)}{h} = \lim_{j\rightarrow 0}\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the "left" or negative direction, or whether $h$ approaches from the "right" or positive direction. For complex numbers we also have to include the limit coming from the real "left/right" or the complex "up/down" direction.</p>
<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. Or $\frac {df}{dx} = -i\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\frac{du}{dx}=\frac{dv}{dy}$ and $\frac {du}{dy} = -\frac {dv}{dx}$ and $f'(z) = \frac{du}{dx}+\frac{dv}{dx}i = \frac {dv}{dy} - \frac {du}{dy}i$.</p>
<p>Okay, that was a bit complicated. But the point is...</p>
<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\frac{du}{dx} = u$ and $\frac{dv}{dx} = v$ but $\frac{dv}{dy} = u$ and $\frac{du}{dy} =-v$.</p>
<p>What possible $u,v$ will satisfy that???</p>
<p>Well, there's only one. $u = e^x*\cos x$ and $v = e^x*\sin x$. I'll spare you the gory details.</p>
<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\cos x + i\sin x)$.</p>
<p>So... $e^{\pi i} = \cos \pi + i \sin \pi = -1$!</p>
<p>....</p>
<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance for $0$ and an angle to the x-axis. If you multiply the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values are roatating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>
Answer 29: <p>$\pi$ is the ratio of a circle's circumference to its diameter (which is a constant) and $i=\sqrt{-1}$ is a complex square root of $-1$.</p>
<p>First of all you have to understand what $e^x$ really is. There are several equivalent definitions but the one that is really important is the following:</p>
<p>$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$</p>
<p>It is the beauty of maths that it coincides with the classical powering operation. And it is important because this definition can be easily extended to complex numbers. Now over complex numbers it has the following property:</p>
<p>$$e^{a+bi}=e^a\cdot(\cos(b)+i\sin(b))$$</p>
<p>Without going into full details this follows from the fact that both $\cos(x)$ and $\sin(x)$ have a "nice" representation as an infinite series as well.</p>
<p>Now putting $a=0$ and $b=\pi$ brings the famous Euler's formula.</p>
Answer 30: <p>If you'd like a more intuitive explanation, here is an image from <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Euler%27s_formula.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.stack.imgur.com/9xL06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xL06.png" alt="enter image description here"></a></p>
<p>To answer your question about $i$ represents, $i=\sqrt{-1}$. It is an <a href="https://en.wikipedia.org/wiki/Imaginary_number" rel="nofollow noreferrer">imaginary number</a>.</p>
<p>As you can see from the diagram, the angle $\varphi$ represents the angle going from the real axis on an <a href="http://mathworld.wolfram.com/ArgandDiagram.html" rel="nofollow noreferrer">Argand Diagram</a> going counter-clockwise.</p>
<p>Basically, in mathematics you have several systems for angles. There is the conventional degrees ($^\circ$), and radians ($^r$). (There are others, however they are not useful at the moment).</p>
<p>$1$ radian is equivalent to $\frac{180}{\pi}$ degrees.</p>
<p>Therefore, a full $180^{\circ}$ degree turn is $\varphi=\pi$ radians.</p>
<p>The equation on the image, $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ (<a href="https://en.wikipedia.org/wiki/Euler's_formula" rel="nofollow noreferrer">Euler's formula</a>) uses radians for angles of $\varphi$.</p>
<p>If you rotate the angle $180$ degrees, or $\pi$ radians, you will notice that the arrow will point towards the left hand side of the unit circle, towards the point $(-1,0i)$. This is why $e^{i\pi}=-1$. You can also just simply substitute $\varphi=\pi$ into Euler's formula, then you'd get the same identity.</p>
<p>If you have any more questions, feel free to ask.</p>
Answer 31: <p>The formula is given by</p>
<p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
<p>It has many profound real applications. For starters, $i=\sqrt{-1}$ and $\pi$ represents half of a circle here. In physics, we have the following application, since $\cos$ and $\sin$ are sinusoidal functions. That is, they look like waves, which occurs in a lot of physics problems. Particularly, half a circle represents $-1$, so $e^{\pi i}=-1$.</p>
<p>Another neat nature about this is that the reals and imaginaries are orthogonal, so we can replace $(x,y)$ coordinate systems with $e^{ix}$ type stuff. The most basic example that comes to my mind is electro-magnetism, where the electro is real and the magnetism is imaginary, or vice versa.</p>
<p>Now, you might be wondering why this should even be allowed. As far as you know, $e^x$ means something is growing exponentially, but suddenly it has to do with waves and circles. What gives? Well, the following formula gives:</p>
<p>$$e^x=1+x+\frac12x^2+\dots+\frac1{n!}x^n+\dots$$</p>
<p>Not only does this equation above follow the basic properties you'd expect exponential functions to follow, but it also allows for an extension to complex numbers quite nicely.</p>
<p><a href="https://i.stack.imgur.com/WLu1U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLu1U.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HeNyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeNyM.png" alt="enter image description here"></a></p>
<p>And so this is the exponential function cut in half. The first is the real part and the second is the imaginary.</p>
<p>Notice how as they go right, they start to behave more and more exponentially. But at the same time, in the other direction, they behave like a wave, going up and down.</p>
<p>A good physical example of why this is so important:</p>
<p><a href="http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg" rel="nofollow noreferrer">http://ap-physics.david-s.org/wp-content/uploads/2012/08/Electromagnetism.jpg</a></p>
<p><a href="https://i.stack.imgur.com/XSPSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSPSM.png" alt="enter image description here"></a></p>
Answer 32: <p>I don't like my old answer. New answer:</p>
<p>What does $e^{x + iy}$ (where $i^2 = -1$) <em>mean</em>?</p>
<p>Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>$e^{x+iy} = e^x*e^{iy}$.</p>
<p>So what does $e^{iy}$ mean. Well, if it is to behave in the complex numbers the same way it does in the real numbers we know:</p>
<p>if $f(y) = e^{yi}$ then $f'(y) = ie^{yi}$ and $f''(y) = i^2e^{yi} = - e^{yi}$ and $f'''(y) = -ie^{yi}$ and $f''''(y) = e^{yi}$.</p>
<p>So if $f(y) =e^{yi} = c(y) + i*s(y)$ for some functions $c(y), s(y)$ then</p>
<p>$f'(y) = c'(y) + i*s'(y) = -s(y) + i*c(y) = i*f(y)$</p>
<p>$f''(y) = -s'(y) + i*c'(y) = -c(y) - i*s(y)=-f(y)$. </p>
<p>What two sets of of functions are so that $c'(y) = -s(y)$ and $s'(y)=c(y)$? And further more we need $1=e^{0i} = c(0) + i*s(0)$ so $c(0) = 1$ and $s(0) = 0$ Well... the only such functions are $s(y) = \sin(y)$ and $c(y) = \cos(y)$.</p>
<p>So.... $e^{x+iy} = e^x(\cos y + i \sin y)$.</p>
<p>So .... $e^{\pi i} = e^0(\cos \pi + i \sin \pi) = 1(-1 + 0) = -1$.</p>
<p>==== old answer ====</p>
<p>It gets abstract.</p>
<p>If we define $b^k = b*b*.....*b$ for positive integer $k$ and $b > 0$ we notice the fundamental:</p>
<p>$b^{k+j} = b^kb^j$</p>
<p>For that we can extend $k$ to zero and negative integers so that $b^0 = 0$ and $b^{-|k|} = \frac 1{b^{|k|}}$ and to the rational numbers via $b^{\frac mn} = (\sqrt[n]{b})^m$ even though "multiplying $b$ by iteslf $-\frac 53$ times" doesn't actually make sense.</p>
<p>When we learn calculus we discover if we extend this to real numbers by taking limits, we get $\frac {de^x}{dx} = e^x$ [because $\lim \frac {b^{x + h} - b^x}{h} = \lim \frac {b^x(b^h - 1)}{h} = b^x\lim \frac{b^h -1}h$ and $e$ is, one way or another, defined-- or proven to be-- the positive real value where $\lim \frac{e^h - 1}{h}= 1$] and we use this to create to recreate the definition of continuous growth.</p>
<p>So all is fine until we get to trying to do Complex Analysis, where we have complex numbers $z = x + xi$ where $x$ and $y$ are real numbers but $i$ is an "imaginary" number so that $i^2 = - 1$.</p>
<p>Slight Diversion. The most fundamental aspect of Complex Analysis and doing calculus and finding derivatives of continuous complex functions. In a sense a complex function $f:\mathbb C \rightarrow \mathbb C$ can be thought of as a multivariate function because $f(z) = w= u+vi$ is a function on $z$ which has a 2-dimensional aspect; $(x, y) \rightarrow (u,v)$. If our function is to be differentiable in complex sense we need $f'(z) = \lim_{h\rightarrow 0} \frac {f((x+h) + iy) - f(x+iy)}{h} = \lim_{j\rightarrow 0}\frac {f(x + i(y+j)) - f(x + iy)}{ij}$. In simply, but lazy terms, to be differentiable the derivative is the same from whatever direction the limit is taken. In real terms that is just whether $h$ approaches from the "left" or negative direction, or whether $h$ approaches from the "right" or positive direction. For complex numbers we also have to include the limit coming from the real "left/right" or the complex "up/down" direction.</p>
<p>But if we use the fact that $i^2 = -1$ we get through a lot of complicated procedures that ingetrating with respect to $x$ is to $-i$ times integrating with respect to $y$. Or $\frac {df}{dx} = -i\frac{df}{dy}$ and furthermore that if $f(x+iy) = u + iv$ then $\frac{du}{dx}=\frac{dv}{dy}$ and $\frac {du}{dy} = -\frac {dv}{dx}$ and $f'(z) = \frac{du}{dx}+\frac{dv}{dx}i = \frac {dv}{dy} - \frac {du}{dy}i$.</p>
<p>Okay, that was a bit complicated. But the point is...</p>
<p>What can $e^{x + yi}$ possibly <em>mean</em>? Well, we don't know. <em>But</em> whatever it means we need to have $e^{x + yi} = u + vi$ so that $(e^{x+yi})' = e^{x+yi}$ and if we have the condition above $\frac{du}{dx} = u$ and $\frac{dv}{dx} = v$ but $\frac{dv}{dy} = u$ and $\frac{du}{dy} =-v$.</p>
<p>What possible $u,v$ will satisfy that???</p>
<p>Well, there's only one. $u = e^x*\cos y$ and $v = e^x*\sin y$. I'll spare you the gory details.</p>
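<p>(My own addition: a symbolic check with <code>sympy</code> that this pair really satisfies all four conditions.)</p>
<pre><code>import sympy as sp

x, y = sp.symbols("x y", real=True)
u = sp.exp(x) * sp.cos(y)
v = sp.exp(x) * sp.sin(y)

assert sp.simplify(sp.diff(u, x) - u) == 0  # du/dx = u
assert sp.simplify(sp.diff(v, x) - v) == 0  # dv/dx = v
assert sp.simplify(sp.diff(v, y) - u) == 0  # dv/dy = u
assert sp.simplify(sp.diff(u, y) + v) == 0  # du/dy = -v
print("all four conditions hold")
</code></pre>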
<p>So.... Ta-dah: If $e^{x+iy}$ is to make any sense and to follow the laws of calculus it must be $e^{x+iy} = e^xe^{iy} = e^x(\cos y + i\sin y)$.</p>
<p>So... $e^{\pi i} = \cos \pi + i \sin \pi = -1$!</p>
<p>....</p>
<p>Which isn't as weird as it seems. If you have $(a+bi)(c+di) = ac + (bc+ad)i + bdi^2 = (ac -bd) + (bc+ad)i$ and graph it on an $x,y$ plane, you notice each complex number has a distance from $0$ and an angle to the x-axis. If you multiply two complex numbers, the distances are multiplied and the angles are added! So multiplication of complex numbers is a combination of multiplying absolute values and rotating. So it isn't surprising (well, actually it's jaw-dropping astonishing) that exponentiation would involve interconnected trig functions.</p>
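<p>(A small numeric illustration of my own.) The modulus of a product is the product of moduli, and the angles (arguments) add:</p>
<pre><code>import cmath

z1 = complex(1.0, 2.0)
z2 = complex(-0.5, 1.5)
prod = z1 * z2

# distances multiply...
assert abs(abs(prod) - abs(z1) * abs(z2)) < 1e-12
# ...and angles add (for these sample points the sum stays inside (-pi, pi])
assert abs(cmath.phase(prod) - (cmath.phase(z1) + cmath.phase(z2))) < 1e-12
print(abs(prod), cmath.phase(prod))
</code></pre>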
|
Conditions for l'Hôpital's rule | 2,094,001 | 5 | https://math.stackexchange.com/questions/2094001/conditions-for-lh%c3%b4pitals-rule | <p>According to l'Hôpital's rule, given functions $f,g$ which are differentiable around $a\in \mathbb R$, such that --</p>
<ol>
<li>$\lim_{x\to a} f(x)=\lim_{x\to a}g(x)=0$</li>
<li>$g'(x)\neq 0$ on some deleted neighborhood of $a$.</li>
<li>$\lim_{x\to a} {\frac {f'(x)}{g'(x)} }$ exists in the broad sense (i.e., possibly as $\pm\infty$).</li>
</ol>
<p>Then $\ \ \lim_{x\to a} {\frac {f(x)}{g(x)} } = \lim_{x\to a} {\frac {f'(x)}{g'(x)} }$.</p>
<p>Condition 2 is necessary for the proof, but I can't find a counterexample for the theorem without it. Could you give an example of differentiable functions $f,g$ around $a$, such that conditions 1, 3 hold, but </p>
<p>$\ \ \lim_{x\to a} {\frac {f(x)}{g(x)} } \neq \lim_{x\to a} {\frac {f'(x)}{g'(x)} }$?</p>
| [
{
"answer_id": 2094055,
"score": 4,
"text": "<p>Condition (2) is present not simply to ensure existence of $\\lim f'(x)/g'(x).$ </p>\n\n<p>One can construct counterexamples where condition (2) is violated but $f'(x)$ and $g'(x)$ share a common factor -- such that $\\lim f'(x)/g'(x)$ exists but $\\lim f(x)/g(x)$ fails to exist.</p>\n\n<p>Stolz produced such a counterexample for the case where $f(x), g(x) \\to \\infty$ as $x \\to a = \\infty$. Specifically,</p>\n\n<p>$$f(x) = \\int_0^x \\cos^2(t) \\, dt \\\\ g(x) = f(x)\\exp ( \\sin x).$$</p>\n"
},
{
"answer_id": 2094034,
"score": 1,
"text": "<p>If condition 2 wasn't satisfied, then $\\lim_{x\\to a}\\frac {f'(x)}{g'(x)}$ couldn't be well-defined in an interval $(a-\\epsilon, a+\\epsilon)$ for $\\epsilon \\in \\mathbb R$ such that $\\exists y \\in (a-\\epsilon, a+\\epsilon)$ : $g'(y)=0$.</p>\n"
}
] | Question: Conditions for l'Hôpital's rule
Answer 1: <p>Condition (2) is present not simply to ensure existence of $\lim f'(x)/g'(x).$ </p>
<p>One can construct counterexamples where condition (2) is violated but $f'(x)$ and $g'(x)$ share a common factor -- such that $\lim f'(x)/g'(x)$ exists but $\lim f(x)/g(x)$ fails to exist.</p>
<p>Stolz produced such a counterexample for the case where $f(x), g(x) \to \infty$ as $x \to a = \infty$. Specifically,</p>
<p>$$f(x) = \int_0^x \cos^2(t) \, dt \\ g(x) = f(x)\exp ( \sin x).$$</p>
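<p>(A numeric sketch of my own, appended for concreteness; it uses the closed form $\int_0^x \cos^2 t \, dt = x/2 + \sin(2x)/4$.) Here $f'/g'$ tends to $0$ while $f/g = e^{-\sin x}$ keeps oscillating:</p>
<pre><code>import math

def f(x):
    return x / 2 + math.sin(2 * x) / 4  # integral of cos^2(t) from 0 to x

def g(x):
    return f(x) * math.exp(math.sin(x))

def df(x):
    return math.cos(x) ** 2

def dg(x):  # product rule: g' = e^{sin x} * (f' + f * cos x)
    return math.exp(math.sin(x)) * (df(x) + f(x) * math.cos(x))

for x in [1000.0, 1000.5, 1001.0, 1001.5]:
    print(f"x = {x}: f'/g' = {df(x) / dg(x):+.5f}, f/g = {f(x) / g(x):.5f}")
</code></pre>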
Answer 2: <p>If condition 2 wasn't satisfied, then $\lim_{x\to a}\frac {f'(x)}{g'(x)}$ couldn't be well-defined in an interval $(a-\epsilon, a+\epsilon)$ for $\epsilon \in \mathbb R$ such that $\exists y \in (a-\epsilon, a+\epsilon)$ : $g'(y)=0$.</p>
|
Is this induction question wrong or am I going insane? | 2,094,051 | 5 | https://math.stackexchange.com/questions/2094051/is-this-induction-question-wrong-or-am-i-going-insane | <p>Here's a question that I've come across: Prove by induction that for every integer $n$ greater than or equal to $1$,</p>
<p>$${\sum_{i=1}^{2^n}} \frac{1}{i} \ge 1 +\frac{n}{2}.$$</p>
<p>Now I know how to prove by induction, but wouldn't this fail $p(1)$ since</p>
<p>$$\frac{1}{1} \ge 1 +\frac{1}{2}$$ is not true?</p>
| [
{
"answer_id": 2094053,
"score": 13,
"text": "<p>Check the upper bound of the summation;</p>\n\n<p>If $n = 1$, then we have</p>\n\n<p>$$\\sum_{i=1}^2 \\frac1i = 1 + \\frac12 \\geq 1 + \\frac12$$</p>\n\n<p>which is definitely true.</p>\n\n<p>Given the discussion in the comments, I expand my answer to include the induction proof as I would do it:</p>\n\n<p><strong>(PROOF BY INDUCTION:)</strong></p>\n\n<p>We already have the base case for $n = 1$; Assume it is true up to $k$. We show it is true for $k + 1$.</p>\n\n<p>The summation becomes</p>\n\n<p>$$\\sum_{i=1}^{2^{k+1}} \\frac1i = \\sum_{i=1}^{2^k} \\frac1i + \\sum_{i=2^k+1}^{2^{k+1}} \\frac1i \\geq 1 + \\frac{k}{2} + \\sum_{i=2^k+1}^{2^{k+1}} \\frac1i$$</p>\n\n<p>which holds because of the induction step. We want to check if</p>\n\n<p>$$1 + \\frac{k}{2} + \\sum_{i=2^k+1}^{2^{k+1}} \\frac1k \\geq 1 + \\frac{k+1}{2} = 1 + \\frac{k}{2} + \\frac12$$</p>\n\n<p>it suffices to show that</p>\n\n<p>$$\\sum_{i=2^k+1}^{2^{k+1}} \\frac1i \\geq \\frac12$$</p>\n\n<p>which is, indeed, true. Notice that every term on that series is greater than, or equal to, $\\frac1{2^{k+1}}$:</p>\n\n<p>$$\\sum_{i=2^k+1}^{2^{k+1}} \\frac1i \\geq \\sum_{i=2^k+1}^{2^{k+1}} \\frac1{2^{k+1}} = 2^k\\cdot \\frac1{2^{k+1}} = \\frac12$$</p>\n\n<p>which concludes the proof.</p>\n"
},
{
"answer_id": 2094054,
"score": 8,
"text": "<p>Neither: the question is not wrong and you are also not going insane ... You just made a little mistake. :)</p>\n"
}
] | Question: Is this induction question wrong or am I going insane?
Answer 1: <p>Check the upper bound of the summation;</p>
<p>If $n = 1$, then we have</p>
<p>$$\sum_{i=1}^2 \frac1i = 1 + \frac12 \geq 1 + \frac12$$</p>
<p>which is definitely true.</p>
<p>Given the discussion in the comments, I expand my answer to include the induction proof as I would do it:</p>
<p><strong>(PROOF BY INDUCTION:)</strong></p>
<p>We already have the base case for $n = 1$; Assume it is true up to $k$. We show it is true for $k + 1$.</p>
<p>The summation becomes</p>
<p>$$\sum_{i=1}^{2^{k+1}} \frac1i = \sum_{i=1}^{2^k} \frac1i + \sum_{i=2^k+1}^{2^{k+1}} \frac1i \geq 1 + \frac{k}{2} + \sum_{i=2^k+1}^{2^{k+1}} \frac1i$$</p>
<p>which holds because of the induction step. We want to check if</p>
<p>$$1 + \frac{k}{2} + \sum_{i=2^k+1}^{2^{k+1}} \frac1i \geq 1 + \frac{k+1}{2} = 1 + \frac{k}{2} + \frac12$$</p>
<p>it suffices to show that</p>
<p>$$\sum_{i=2^k+1}^{2^{k+1}} \frac1i \geq \frac12$$</p>
<p>which is, indeed, true. Notice that every term on that series is greater than, or equal to, $\frac1{2^{k+1}}$:</p>
<p>$$\sum_{i=2^k+1}^{2^{k+1}} \frac1i \geq \sum_{i=2^k+1}^{2^{k+1}} \frac1{2^{k+1}} = 2^k\cdot \frac1{2^{k+1}} = \frac12$$</p>
<p>which concludes the proof.</p>
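<p>(A quick numerical sanity check, my own addition.) Exact rational arithmetic confirms the bound for the first few $n$:</p>
<pre><code>from fractions import Fraction

for n in range(1, 11):
    s = sum(Fraction(1, i) for i in range(1, 2**n + 1))
    assert s >= 1 + Fraction(n, 2)
    print(f"n = {n}: sum = {float(s):.4f} >= {1 + n / 2:.4f}")
</code></pre>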
Answer 2: <p>Neither: the question is not wrong and you are also not going insane ... You just made a little mistake. :)</p>
|
How can I prove $\log_2{(1+x)} \geq {x}$ for $0<x<1$ | 2,094,101 | 5 | https://math.stackexchange.com/questions/2094101/how-can-i-prove-log-21x-geq-x-for-0x1 | <blockquote>
<p>How can I prove the following inequality for $0<x<1$
$$\log_2{(1+x)} \geq {x}$$</p>
</blockquote>
<p>I tried to use $\ln(1+x) \geq x-\frac{x^2}{2}$ for $x\geq 0$ and convert the $\ln$ to $\log_2$ to prove that, but it does not work. Any idea?
Is there a generalized form? </p>
<p>I need an analytical proof, not a sketch or an appeal to the graph. Thanks.</p>
| [
{
"answer_id": 2094118,
"score": 4,
"text": "<p>The logarithm function is concave. </p>\n\n<p>Thus, it lies above any secant line. </p>\n\n<p>In particular, on $[0,1]$, the function $x$ is a secant line for $\\log_2(1+x)$. </p>\n\n<p>Hence, $\\log_2(1+x)\\ge x$ on $[0,1]$.</p>\n"
},
{
"answer_id": 2094105,
"score": 3,
"text": "<p><strong>Hint</strong>: Exponentiate the inequality to the equivalent form $1+x\\ge2^x$, and sketch the two graphs $y=1+x$ and $y=2^x$. Note that $(x,y)=(0,1)$ and $(1,2)$ are common to both graphs.</p>\n"
},
{
"answer_id": 2094108,
"score": 1,
"text": "<p>Consider the first derivate :$(\\frac{\\ln(1+x)}{\\ln2})' = \\frac{1}{\\ln2(1+x)} > 0$ of $0<x<1$, so our function is increase. $x$ also incease, so. We should just check what will be on one point. For example $x = 3$ second function is bigger, so on $0 < x < 1$ the first one will be bigger than second (because of they intersects in $x = 1$).</p>\n"
},
{
"answer_id": 2094115,
"score": 1,
"text": "<p>As Barry said, convert to the equivalent form $1+x \\geq 2^x$. Then, consider the function $f(x)=1+x-2^x$. Taking two derivatives you get that $f''(x)=-2^x (\\ln(2))^2$ which is always negative. Hence $f$ is concave. Since $f(0)=0=f(1)$, it follows that $f$ must be positive on $(0,1)$, which is what you wanted.</p>\n"
},
{
"answer_id": 2094113,
"score": 0,
"text": "<p>The key is, the derivative of the function $f(x)=\\log_{2}(1+x)-x$ is $g(x)=\\log_2 e\\cdot\\frac{1}{1+x} - 1$. The function $g(x)$ is decreasing for $x\\in [0, 1]$, and the value is from positive to negative. So the minimum of $f(x)$ is at $0$ and $1$. </p>\n"
}
] | Question: How can I prove $\log_2{(1+x)} \geq {x}$ for $0<x<1$
Answer 1: <p>The logarithm function is concave. </p>
<p>Thus, it lies above any secant line. </p>
<p>In particular, on $[0,1]$, the function $x$ is a secant line for $\log_2(1+x)$. </p>
<p>Hence, $\log_2(1+x)\ge x$ on $[0,1]$.</p>
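<p>(Spelling out the secant-line step, my own elaboration.) Writing $1+x$ as a convex combination of $1$ and $2$ and using concavity of $\log_2$:</p>
<p>$$\log_2(1+x) = \log_2\bigl((1-x)\cdot 1 + x\cdot 2\bigr) \ \ge\ (1-x)\log_2 1 + x\log_2 2 = x, \qquad 0\le x\le 1.$$</p>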
Answer 2: <p><strong>Hint</strong>: Exponentiate the inequality to the equivalent form $1+x\ge2^x$, and sketch the two graphs $y=1+x$ and $y=2^x$. Note that $(x,y)=(0,1)$ and $(1,2)$ are common to both graphs.</p>
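<p>(A numeric spot check of this hint, my own addition.) The equivalent form $1+x\ge 2^x$ holds on a grid of points in $(0,1)$:</p>
<pre><code># sample x = 0.01, 0.02, ..., 0.99 and verify the exponentiated inequality
for k in range(1, 100):
    x = k / 100
    assert 1 + x >= 2 ** x, x
print("1 + x >= 2**x at all sampled points of (0, 1)")
</code></pre>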
Answer 3: <p>Consider the first derivative: $(\frac{\ln(1+x)}{\ln2})' = \frac{1}{\ln2\,(1+x)} > 0$ for $0<x<1$, so our function is increasing, and so is $x$. We then just need to check one point: for example, at $x = 3$ the second function is bigger, so on $0 < x < 1$ the first one is bigger than the second (because the two graphs intersect at $x = 1$).</p>
Answer 4: <p>As Barry said, convert to the equivalent form $1+x \geq 2^x$. Then, consider the function $f(x)=1+x-2^x$. Taking two derivatives you get that $f''(x)=-2^x (\ln(2))^2$ which is always negative. Hence $f$ is concave. Since $f(0)=0=f(1)$, it follows that $f$ must be positive on $(0,1)$, which is what you wanted.</p>
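<p>(My own addition: checking this answer's two derivative facts symbolically with <code>sympy</code>.)</p>
<pre><code>import sympy as sp

x = sp.symbols("x", real=True)
f = 1 + x - 2 ** x

# f'' = -2^x ln(2)^2, which is negative everywhere
assert sp.simplify(sp.diff(f, x, 2) + 2 ** x * sp.log(2) ** 2) == 0
# zeros at the endpoints of [0, 1]
assert f.subs(x, 0) == 0 and f.subs(x, 1) == 0
print("f is concave with f(0) = f(1) = 0, hence f > 0 on (0, 1)")
</code></pre>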
Answer 5: <p>The key is, the derivative of the function $f(x)=\log_{2}(1+x)-x$ is $g(x)=\log_2 e\cdot\frac{1}{1+x} - 1$. The function $g(x)$ is decreasing for $x\in [0, 1]$, and its value goes from positive to negative, so $f$ first increases and then decreases. Hence the minimum of $f(x)$ on $[0,1]$ is attained at the endpoints $0$ and $1$, where $f(0)=f(1)=0$, so $f(x)\ge 0$ on the whole interval.</p>
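<p>(My own elaboration: the sign change can be pinned down exactly.) Solving $g(x)=0$ gives</p>
<p>$$\frac{\log_2 e}{1+x} = 1 \iff 1 + x = \log_2 e = \frac{1}{\ln 2} \iff x = \frac{1}{\ln 2} - 1 \approx 0.4427,$$</p>
<p>which is the interior maximum of $f$; the minima sit at the endpoints as stated above.</p>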
<p>Thus, it lies above any secant line. </p>
<p>In particular, on $[0,1]$, the function $x$ is a secant line for $\log_2(1+x)$. </p>
<p>Hence, $\log_2(1+x)\ge x$ on $[0,1]$.</p>
Answer 32: <p><strong>Hint</strong>: Exponentiate the inequality to the equivalent form $1+x\ge2^x$, and sketch the two graphs $y=1+x$ and $y=2^x$. Note that $(x,y)=(0,1)$ and $(1,2)$ are common to both graphs.</p>
Answer 33: <p>Consider the first derivate :$(\frac{\ln(1+x)}{\ln2})' = \frac{1}{\ln2(1+x)} > 0$ of $0<x<1$, so our function is increase. $x$ also incease, so. We should just check what will be on one point. For example $x = 3$ second function is bigger, so on $0 < x < 1$ the first one will be bigger than second (because of they intersects in $x = 1$).</p>
Answer 34: <p>As Barry said, convert to the equivalent form $1+x \geq 2^x$. Then, consider the function $f(x)=1+x-2^x$. Taking two derivatives you get that $f''(x)=-2^x (\ln(2))^2$ which is always negative. Hence $f$ is concave. Since $f(0)=0=f(1)$, it follows that $f$ must be positive on $(0,1)$, which is what you wanted.</p>
Answer 35: <p>The key is, the derivative of the function $f(x)=\log_{2}(1+x)-x$ is $g(x)=\log_2 e\cdot\frac{1}{1+x} - 1$. The function $g(x)$ is decreasing for $x\in [0, 1]$, and the value is from positive to negative. So the minimum of $f(x)$ is at $0$ and $1$. </p>
Answer 36: <p>The logarithm function is concave. </p>
<p>Thus, it lies above any secant line. </p>
<p>In particular, on $[0,1]$, the function $x$ is a secant line for $\log_2(1+x)$. </p>
<p>Hence, $\log_2(1+x)\ge x$ on $[0,1]$.</p>
Answer 37: <p><strong>Hint</strong>: Exponentiate the inequality to the equivalent form $1+x\ge2^x$, and sketch the two graphs $y=1+x$ and $y=2^x$. Note that $(x,y)=(0,1)$ and $(1,2)$ are common to both graphs.</p>
Answer 38: <p>Consider the first derivate :$(\frac{\ln(1+x)}{\ln2})' = \frac{1}{\ln2(1+x)} > 0$ of $0<x<1$, so our function is increase. $x$ also incease, so. We should just check what will be on one point. For example $x = 3$ second function is bigger, so on $0 < x < 1$ the first one will be bigger than second (because of they intersects in $x = 1$).</p>
Answer 39: <p>As Barry said, convert to the equivalent form $1+x \geq 2^x$. Then, consider the function $f(x)=1+x-2^x$. Taking two derivatives you get that $f''(x)=-2^x (\ln(2))^2$ which is always negative. Hence $f$ is concave. Since $f(0)=0=f(1)$, it follows that $f$ must be positive on $(0,1)$, which is what you wanted.</p>
Answer 40: <p>The key is, the derivative of the function $f(x)=\log_{2}(1+x)-x$ is $g(x)=\log_2 e\cdot\frac{1}{1+x} - 1$. The function $g(x)$ is decreasing for $x\in [0, 1]$, and the value is from positive to negative. So the minimum of $f(x)$ is at $0$ and $1$. </p>
|
book recommendation for algebraic geometry | 2,094,187 | 5 | https://math.stackexchange.com/questions/2094187/book-recommendation-for-algebraic-geometry | <p>I have just finished my undergraduate studies and I want to study some algebraic geometry. I have taken 1 year of analysis, topology, and algebra courses and finished 1 semester each of complex analysis, homological algebra, Fourier analysis, and differential geometry. </p>
<p>With this background I'm planning to study Atiyah's commutative algebra. After that, where should the starting point for studying A.G. be? Hartshorne is surely too tough for me. I want to read some A.G. books which are not too hard to follow and are reasonably self-contained. </p>
| [
{
"answer_id": 2094648,
"score": 4,
"text": "<p>(My answer ended up being too long for a comment.)</p>\n\n<p>Hi there. Just finished up my first semester as a graduate student focusing heavily on geometry and topology at Colorado State University. Didn't know any algebraic geometry until this past spring and have...grown since then haha.</p>\n\n<p>Given your background, you would probably appreciate Rick Miranda's <em>Algebraic Curves and Riemann Surfaces</em>. It is almost entirely done in the context of complex analysis, but all the things you want to be true in the algebraic geometry context are true if you just replace \"holomorphic\" and \"meromorphic\" with \"regular\" and \"rational.\" (In fact, Dr. Miranda notes this in his \"Algebraic Sheaves\" section toward the end.) You get a great introduction to curves, divisors, sheaves, line bundles, etc., all with examples using the Riemann Sphere (Complex projective line), tori, and a general complex projective (or affine) curve. You also <em>might</em> be able to find this book as a PDF if you know how to Google well. And you don't need all the deep commutative algebra to understand this book, because it leaves everything mostly complex analytic (in a fun way) and really just relies on a little differential topology in some spots that you might not have. If you do choose that book, feel free to get ahold of me if you want some discussion on the concepts or a bit of help with some of the exercises. I'm just finishing up my first readthrough of that one over the winter break and have been doing the problem in that book for a couple years now. (I think my contact info should be on my Google Drive. If not, you can ask me for it on here or probably find it under the CSU graduate students list.)</p>\n\n<p>After a final review of the rest of my recommendations, I really would put Dr. Miranda's book at the top for you. That book makes you feel like you're doing complex analysis with some differential topology but sneaks you into algebraic geometry until you realize that's what you were doing the whole time. And, in fact, there is a paper called Algebraic Geometry and Analytic Geometry--GAGA because French--which states that basically these two are one in the same; complex analytic geometry and complex algebraic geometry really aren't different. (Cf. Chow's Theorem and Dr. Miranda mentions this paper as well.) I think this book would be ideal for your background.</p>\n\n<p>It's a good idea to get some of the intuition in Algebraic Geometry <em>before</em> trying to get all the pure commutative algebra down. Otherwise, you're just doing pure algebra...which, like...why bother :P\nAnd for that purpose I would also recommend reading some of Dieudonne's <em>The Historical Development of Algebraic Geometry.</em> Get a feel for what problems people were trying to solve when developing all this theory and an (even better) appreciation for why Riemann was such a badass. Diuedonne, if you don't know, was part of the Bourbaki group and was (if I remember right) the scribe of the group for some time. He was around when all the exciting developments in alg. geo. were happening and knew enough of history to chronicle those developments in context.</p>\n\n<p>If, however, you are wanting to do some algebra before anything else, check out Dummit and Foote's Algebraic Geometry part/chapter/section (whatever they call it) which goes through the Hilbert Basis Theorem and other very basic results which we use(d) in a classical algebraic geometry intro. class. 
You also learn enough about localization (especially the correspondence between prime ideals in a localization and prime ideals in your original ring).</p>\n\n<p>Other popular algebraic geometry books include Milne's Notes (just google the words \"Milne Algebraic Geometry\"--freely available online by the author), obviously Hartshorne (though I get that that's a bit tough to read), and maybe some Harris and/or Eisenbud stuff (e.g. Geometry of Schemes which you might be able to find online). You might be able to find Shafarevich's <em>Basic Algebraic Geometry</em> online, too, and that one has plenty of examples. You <em>really</em> want a lot of examples when studying algebraic geometry.</p>\n\n<p>A book with some <em>heavy</em> category theory and basically all the algebraic geometry in the world, you could look at Vakil's <em>The Rising Sea.</em> He publishes these (I hesitate to call them) notes online every year or so after edits, and this book is just a massive outpouring of alg. geo. with a 100-page grounding in category theory (and some commutative algebra in category theoretic terms) as the foundation for sheaves and schemes. Along the way, Ravi Vakil gives some great pictures of what we are trying to describe with Spec(-) under the Zariski topology and the affine scheme. I honestly have only gotten through maybe 150 pages of that book after about 4 or 5 months of time with it, because of its density and thoroughness. There are examples throughout, and you should really do all the examples you can. The only way I've found to get through alg. geo. is by examples--otherwise you're lost in a sea of abstract meaninglessness.</p>\n\n<p>One note that cannot be stressed enough: <em>DO PROBLEMS</em>\nDo problems while you're awake. Do problems while you're half-awake. Do problems on the train. Do problems on the bus. Do problems in your sleep. Do problems at the gym. Do problems literally all of the time every time all day every day you possibly can. And if you're having trouble with some problems, ask the community! They've been great while I learn algebraic geometry, and I'll be around to post when I can as well! :)</p>\n\n<p>Another book that some of my peers found useful while trudging through our first alg. geo. class this fall is Karen Smith's <em>An Invitation to Algebraic Geometry.</em></p>\n"
},
{
"answer_id": 3234127,
"score": 4,
"text": "<p>I recommend the books \"Ideals, Varieties, and Algorithms. An Introduction to Computational Algebraic Geometry and Commutative Algebra. Authors: Cox, David A, Little, John, Oshea, Donal\" and </p>\n\n<p>\"Computational Commutative Algebra and Algebraic\nGeometry Course and exercises with solutions. Author: I. Yengui\".</p>\n"
}
] | Question: book recommendation for algebraic geometry
Answer 1: <p>(My answer ended up being too long for a comment.)</p>
<p>Hi there. Just finished up my first semester as a graduate student focusing heavily on geometry and topology at Colorado State University. Didn't know any algebraic geometry until this past spring and have...grown since then haha.</p>
<p>Given your background, you would probably appreciate Rick Miranda's <em>Algebraic Curves and Riemann Surfaces</em>. It is almost entirely done in the context of complex analysis, but all the things you want to be true in the algebraic geometry context are true if you just replace "holomorphic" and "meromorphic" with "regular" and "rational." (In fact, Dr. Miranda notes this in his "Algebraic Sheaves" section toward the end.) You get a great introduction to curves, divisors, sheaves, line bundles, etc., all with examples using the Riemann Sphere (Complex projective line), tori, and a general complex projective (or affine) curve. You also <em>might</em> be able to find this book as a PDF if you know how to Google well. And you don't need all the deep commutative algebra to understand this book, because it leaves everything mostly complex analytic (in a fun way) and really just relies on a little differential topology in some spots that you might not have. If you do choose that book, feel free to get ahold of me if you want some discussion on the concepts or a bit of help with some of the exercises. I'm just finishing up my first readthrough of that one over the winter break and have been doing the problems in that book for a couple years now. (I think my contact info should be on my Google Drive. If not, you can ask me for it on here or probably find it under the CSU graduate students list.)</p>
<p>After a final review of the rest of my recommendations, I really would put Dr. Miranda's book at the top for you. That book makes you feel like you're doing complex analysis with some differential topology but sneaks you into algebraic geometry until you realize that's what you were doing the whole time. And, in fact, there is a paper by Serre called <em>Géométrie Algébrique et Géométrie Analytique</em>--GAGA, because French--which states that basically these two are one and the same; complex analytic geometry and complex algebraic geometry really aren't different. (Cf. Chow's Theorem; Dr. Miranda mentions this paper as well.) I think this book would be ideal for your background.</p>
<p>It's a good idea to get some of the intuition in Algebraic Geometry <em>before</em> trying to get all the pure commutative algebra down. Otherwise, you're just doing pure algebra...which, like...why bother :P
And for that purpose I would also recommend reading some of Dieudonné's <em>The Historical Development of Algebraic Geometry.</em> Get a feel for what problems people were trying to solve when developing all this theory and an (even better) appreciation for why Riemann was such a badass. Dieudonné, if you don't know, was part of the Bourbaki group and was (if I remember right) the scribe of the group for some time. He was around when all the exciting developments in alg. geo. were happening and knew enough history to chronicle those developments in context.</p>
<p>If, however, you are wanting to do some algebra before anything else, check out Dummit and Foote's Algebraic Geometry part/chapter/section (whatever they call it) which goes through the Hilbert Basis Theorem and other very basic results which we use(d) in a classical algebraic geometry intro. class. You also learn enough about localization (especially the correspondence between prime ideals in a localization and prime ideals in your original ring).</p>
<p>Other popular algebraic geometry books include Milne's Notes (just google the words "Milne Algebraic Geometry"--freely available online by the author), obviously Hartshorne (though I get that that's a bit tough to read), and maybe some Harris and/or Eisenbud stuff (e.g. Geometry of Schemes which you might be able to find online). You might be able to find Shafarevich's <em>Basic Algebraic Geometry</em> online, too, and that one has plenty of examples. You <em>really</em> want a lot of examples when studying algebraic geometry.</p>
<p>A book with some <em>heavy</em> category theory and basically all the algebraic geometry in the world, you could look at Vakil's <em>The Rising Sea.</em> He publishes these (I hesitate to call them) notes online every year or so after edits, and this book is just a massive outpouring of alg. geo. with a 100-page grounding in category theory (and some commutative algebra in category theoretic terms) as the foundation for sheaves and schemes. Along the way, Ravi Vakil gives some great pictures of what we are trying to describe with Spec(-) under the Zariski topology and the affine scheme. I honestly have only gotten through maybe 150 pages of that book after about 4 or 5 months of time with it, because of its density and thoroughness. There are examples throughout, and you should really do all the examples you can. The only way I've found to get through alg. geo. is by examples--otherwise you're lost in a sea of abstract meaninglessness.</p>
<p>One note that cannot be stressed enough: <em>DO PROBLEMS</em>
Do problems while you're awake. Do problems while you're half-awake. Do problems on the train. Do problems on the bus. Do problems in your sleep. Do problems at the gym. Do problems literally all of the time every time all day every day you possibly can. And if you're having trouble with some problems, ask the community! They've been great while I've been learning algebraic geometry, and I'll be around to post when I can as well! :)</p>
<p>Another book that some of my peers found useful while trudging through our first alg. geo. class this fall is Karen Smith's <em>An Invitation to Algebraic Geometry.</em></p>
Answer 2: <p>I recommend the books "Ideals, Varieties, and Algorithms. An Introduction to Computational Algebraic Geometry and Commutative Algebra. Authors: Cox, David A, Little, John, Oshea, Donal" and </p>
<p>"Computational Commutative Algebra and Algebraic
Geometry Course and exercises with solutions. Author: I. Yengui".</p>
|
Compute the integral $\int_0^{\pi/2}\sin(1+\cos^2x)\mathrm{d}x $ | 2,094,617 | 5 | https://math.stackexchange.com/questions/2094617/compute-the-integral-int-0-pi-2-sin1-cos2x-mathrmdx | <p>My attempt.
$$\int\limits_0^{\pi/2}\sin(1+\cos^2x)\mathrm{d}x $$
Let $t=1+\cos^2x \Rightarrow \mathrm{d}t = -2\sin x\cos x~ \mathrm{d}x. $
We have $\cos^2x=t-1 \Rightarrow \cos x=\sqrt{t-1}$ and $\sin x = \sqrt{2-t}.$ So $$\mathrm{d}x = \frac{-\mathrm{d}t}{2\sqrt{2-t}\sqrt{t-1}}$$
Now,
$$\frac{1}{2}\int\limits_1^2\frac{\sin t}{\sqrt{2-t}\sqrt{t-1}}\mathrm{d}t$$
I tried integration by parts, letting $u=\sin t$ and $\mathrm{d}v = \mathrm{d}t/\sqrt{(2-t)(t-1)}$. </p>
<p>But the integral just gets more complicated.</p>
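<p>A quick numerical sanity check of the substitution (a minimal sketch of my own, assuming <code>scipy</code> is available) shows the two forms agree:</p>
<pre><code># Numerical sanity check that the substitution t = 1 + cos^2(x) was done
# correctly: both values below should agree. Assumes scipy is installed.
import numpy as np
from scipy.integrate import quad

# Original form: integral of sin(1 + cos^2 x) over [0, pi/2].
lhs, _ = quad(lambda x: np.sin(1.0 + np.cos(x) ** 2), 0.0, np.pi / 2)

# Substituted form: (1/2) * integral of sin(t)/sqrt((2-t)(t-1)) over [1, 2].
# The 1/sqrt endpoint singularities are integrable and quad handles them.
rhs, _ = quad(lambda t: np.sin(t) / np.sqrt((2.0 - t) * (t - 1.0)), 1.0, 2.0)

print(lhs, 0.5 * rhs)  # both are approximately 1.4705
</code></pre>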
| [
{
"answer_id": 2094645,
"score": 10,
"text": "<p>Use the <a href=\"https://math.stackexchange.com/questions/1196401\">integral representations</a> of the <a href=\"http://en.wikipedia.org/wiki/Bessel_function\" rel=\"nofollow noreferrer\">Bessel function</a> of the first kind at $0$, </p>\n\n<p>$$\\int_{0}^{\\pi/2}\\cos\\left(z\\cos \\theta\\right)\\,\\mathrm{d}\\theta=\\frac{\\pi}{2}J_0\\left(z\\right).$$ </p>\n\n<p>So, </p>\n\n<p>$$\\begin{align*}\n\\int_0^{\\frac{\\pi}{2}}\\sin\\left(1+\\cos^2x\\right)\\mathrm{d}x&=\\int_0^{\\frac{\\pi}{2}}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)+\\frac{3}{2}\\right)\\mathrm{d}x\\\\\n&=\\sin\\left(\\frac{3}{2}\\right)\\int_{0}^{\\frac{\\pi}{2}}\\cos\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x+\\cos\\left(\\frac{3}{2}\\right)\\underset{0}{\\underbrace{\\int^{\\frac{\\pi}{2}}_{0}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x}}\\\\\n&=\\frac{\\pi}{2}\\sin\\left(\\frac{3}{2}\\right)J_0\\left(\\frac{1}{2}\\right)\n\\end{align*}$$</p>\n"
},
{
"answer_id": 2094647,
"score": 4,
"text": "<p>$$\nI=\\Im\\int_{0}^{\\pi/2}e^{i(1+\\cos^2(x))}=\\Im \\left(e^{i\\frac 32}\\int_{0}^{\\pi/2}e^{\\frac i2\\cos(2x)}\\right)=\\frac{1}{4}\\Im\\left( e^{i\\frac 32}\\int_{-\\pi}^{\\pi}e^{\\frac i2\\cos(x)}\\right)\\underbrace{=}_{(\\star)}\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\Im (e^{i\\frac 32})=\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\sin\\left(\\frac{3}{2}\\right)\n$$</p>\n\n<p>where $J_0(z)$ is a Bessel function of the first type and equation $(\\star)$ frollows <a href=\"https://en.wikipedia.org/wiki/Bessel_function#Bessel.27s_integrals\" rel=\"nofollow noreferrer\">from here</a></p>\n"
},
{
"answer_id": 2094759,
"score": 0,
"text": "<p>I took an approach (below) but then dropped the idea to continue since the other answers were so neat and beautiful to follow and mine was a lot of work, which I shall complete at a later point of time (I hope at least). I hence write what I have so far below</p>\n\n<p>\\begin{align}\n\\int_0^{\\pi/2}\\cos^{2n}(2x)dx&=\\int_0^{\\pi/2}\\cos^{2n}(x)dx\\\\\n&=\\frac12\\text{Beta}(\\frac12,n+\\frac12)\n\\end{align}\nhence \\begin{align}\n\\int_0^{\\pi/2}\\sin(1+\\cos^2x)\\mathrm{d}x &=\\int_0^{\\pi/2}\\sum_{m=0}^{\\infty}\\frac{(-1)^m(1+\\cos^2x)^{2m+1}}{(2m+1)!}\\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2}(1+\\cos^2x)^{2m+1}\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2} \\sum_{j=0}^{2m+1}{2k+1\\choose j} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\int_0^{\\pi/2} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\text{Beta}(\\frac12,j+\\frac12)\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\pi\\frac{(2j-1)!!}{2^jj!}\\\\\n&=\\frac12\\pi\\sum_{m=0}^{\\infty}\\sum_{j=0}^{2m+1}\\frac{(-1)^k}{(2m+1-j)!}\\frac{(2j-1)!!}{2^jj!^2}\\\\\n\\end{align}</p>\n"
},
{
"answer_id": 2094645,
"score": 10,
"text": "<p>Use the <a href=\"https://math.stackexchange.com/questions/1196401\">integral representations</a> of the <a href=\"http://en.wikipedia.org/wiki/Bessel_function\" rel=\"nofollow noreferrer\">Bessel function</a> of the first kind at $0$, </p>\n\n<p>$$\\int_{0}^{\\pi/2}\\cos\\left(z\\cos \\theta\\right)\\,\\mathrm{d}\\theta=\\frac{\\pi}{2}J_0\\left(z\\right).$$ </p>\n\n<p>So, </p>\n\n<p>$$\\begin{align*}\n\\int_0^{\\frac{\\pi}{2}}\\sin\\left(1+\\cos^2x\\right)\\mathrm{d}x&=\\int_0^{\\frac{\\pi}{2}}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)+\\frac{3}{2}\\right)\\mathrm{d}x\\\\\n&=\\sin\\left(\\frac{3}{2}\\right)\\int_{0}^{\\frac{\\pi}{2}}\\cos\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x+\\cos\\left(\\frac{3}{2}\\right)\\underset{0}{\\underbrace{\\int^{\\frac{\\pi}{2}}_{0}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x}}\\\\\n&=\\frac{\\pi}{2}\\sin\\left(\\frac{3}{2}\\right)J_0\\left(\\frac{1}{2}\\right)\n\\end{align*}$$</p>\n"
},
{
"answer_id": 2094647,
"score": 4,
"text": "<p>$$\nI=\\Im\\int_{0}^{\\pi/2}e^{i(1+\\cos^2(x))}=\\Im \\left(e^{i\\frac 32}\\int_{0}^{\\pi/2}e^{\\frac i2\\cos(2x)}\\right)=\\frac{1}{4}\\Im\\left( e^{i\\frac 32}\\int_{-\\pi}^{\\pi}e^{\\frac i2\\cos(x)}\\right)\\underbrace{=}_{(\\star)}\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\Im (e^{i\\frac 32})=\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\sin\\left(\\frac{3}{2}\\right)\n$$</p>\n\n<p>where $J_0(z)$ is a Bessel function of the first type and equation $(\\star)$ frollows <a href=\"https://en.wikipedia.org/wiki/Bessel_function#Bessel.27s_integrals\" rel=\"nofollow noreferrer\">from here</a></p>\n"
},
{
"answer_id": 2094759,
"score": 0,
"text": "<p>I took an approach (below) but then dropped the idea to continue since the other answers were so neat and beautiful to follow and mine was a lot of work, which I shall complete at a later point of time (I hope at least). I hence write what I have so far below</p>\n\n<p>\\begin{align}\n\\int_0^{\\pi/2}\\cos^{2n}(2x)dx&=\\int_0^{\\pi/2}\\cos^{2n}(x)dx\\\\\n&=\\frac12\\text{Beta}(\\frac12,n+\\frac12)\n\\end{align}\nhence \\begin{align}\n\\int_0^{\\pi/2}\\sin(1+\\cos^2x)\\mathrm{d}x &=\\int_0^{\\pi/2}\\sum_{m=0}^{\\infty}\\frac{(-1)^m(1+\\cos^2x)^{2m+1}}{(2m+1)!}\\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2}(1+\\cos^2x)^{2m+1}\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2} \\sum_{j=0}^{2m+1}{2k+1\\choose j} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\int_0^{\\pi/2} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\text{Beta}(\\frac12,j+\\frac12)\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\pi\\frac{(2j-1)!!}{2^jj!}\\\\\n&=\\frac12\\pi\\sum_{m=0}^{\\infty}\\sum_{j=0}^{2m+1}\\frac{(-1)^k}{(2m+1-j)!}\\frac{(2j-1)!!}{2^jj!^2}\\\\\n\\end{align}</p>\n"
},
{
"answer_id": 2094645,
"score": 10,
"text": "<p>Use the <a href=\"https://math.stackexchange.com/questions/1196401\">integral representations</a> of the <a href=\"http://en.wikipedia.org/wiki/Bessel_function\" rel=\"nofollow noreferrer\">Bessel function</a> of the first kind at $0$, </p>\n\n<p>$$\\int_{0}^{\\pi/2}\\cos\\left(z\\cos \\theta\\right)\\,\\mathrm{d}\\theta=\\frac{\\pi}{2}J_0\\left(z\\right).$$ </p>\n\n<p>So, </p>\n\n<p>$$\\begin{align*}\n\\int_0^{\\frac{\\pi}{2}}\\sin\\left(1+\\cos^2x\\right)\\mathrm{d}x&=\\int_0^{\\frac{\\pi}{2}}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)+\\frac{3}{2}\\right)\\mathrm{d}x\\\\\n&=\\sin\\left(\\frac{3}{2}\\right)\\int_{0}^{\\frac{\\pi}{2}}\\cos\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x+\\cos\\left(\\frac{3}{2}\\right)\\underset{0}{\\underbrace{\\int^{\\frac{\\pi}{2}}_{0}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x}}\\\\\n&=\\frac{\\pi}{2}\\sin\\left(\\frac{3}{2}\\right)J_0\\left(\\frac{1}{2}\\right)\n\\end{align*}$$</p>\n"
},
{
"answer_id": 2094647,
"score": 4,
"text": "<p>$$\nI=\\Im\\int_{0}^{\\pi/2}e^{i(1+\\cos^2(x))}=\\Im \\left(e^{i\\frac 32}\\int_{0}^{\\pi/2}e^{\\frac i2\\cos(2x)}\\right)=\\frac{1}{4}\\Im\\left( e^{i\\frac 32}\\int_{-\\pi}^{\\pi}e^{\\frac i2\\cos(x)}\\right)\\underbrace{=}_{(\\star)}\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\Im (e^{i\\frac 32})=\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\sin\\left(\\frac{3}{2}\\right)\n$$</p>\n\n<p>where $J_0(z)$ is a Bessel function of the first type and equation $(\\star)$ frollows <a href=\"https://en.wikipedia.org/wiki/Bessel_function#Bessel.27s_integrals\" rel=\"nofollow noreferrer\">from here</a></p>\n"
},
{
"answer_id": 2094759,
"score": 0,
"text": "<p>I took an approach (below) but then dropped the idea to continue since the other answers were so neat and beautiful to follow and mine was a lot of work, which I shall complete at a later point of time (I hope at least). I hence write what I have so far below</p>\n\n<p>\\begin{align}\n\\int_0^{\\pi/2}\\cos^{2n}(2x)dx&=\\int_0^{\\pi/2}\\cos^{2n}(x)dx\\\\\n&=\\frac12\\text{Beta}(\\frac12,n+\\frac12)\n\\end{align}\nhence \\begin{align}\n\\int_0^{\\pi/2}\\sin(1+\\cos^2x)\\mathrm{d}x &=\\int_0^{\\pi/2}\\sum_{m=0}^{\\infty}\\frac{(-1)^m(1+\\cos^2x)^{2m+1}}{(2m+1)!}\\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2}(1+\\cos^2x)^{2m+1}\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2} \\sum_{j=0}^{2m+1}{2k+1\\choose j} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\int_0^{\\pi/2} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\text{Beta}(\\frac12,j+\\frac12)\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\pi\\frac{(2j-1)!!}{2^jj!}\\\\\n&=\\frac12\\pi\\sum_{m=0}^{\\infty}\\sum_{j=0}^{2m+1}\\frac{(-1)^k}{(2m+1-j)!}\\frac{(2j-1)!!}{2^jj!^2}\\\\\n\\end{align}</p>\n"
},
{
"answer_id": 2094645,
"score": 10,
"text": "<p>Use the <a href=\"https://math.stackexchange.com/questions/1196401\">integral representations</a> of the <a href=\"http://en.wikipedia.org/wiki/Bessel_function\" rel=\"nofollow noreferrer\">Bessel function</a> of the first kind at $0$, </p>\n\n<p>$$\\int_{0}^{\\pi/2}\\cos\\left(z\\cos \\theta\\right)\\,\\mathrm{d}\\theta=\\frac{\\pi}{2}J_0\\left(z\\right).$$ </p>\n\n<p>So, </p>\n\n<p>$$\\begin{align*}\n\\int_0^{\\frac{\\pi}{2}}\\sin\\left(1+\\cos^2x\\right)\\mathrm{d}x&=\\int_0^{\\frac{\\pi}{2}}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)+\\frac{3}{2}\\right)\\mathrm{d}x\\\\\n&=\\sin\\left(\\frac{3}{2}\\right)\\int_{0}^{\\frac{\\pi}{2}}\\cos\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x+\\cos\\left(\\frac{3}{2}\\right)\\underset{0}{\\underbrace{\\int^{\\frac{\\pi}{2}}_{0}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x}}\\\\\n&=\\frac{\\pi}{2}\\sin\\left(\\frac{3}{2}\\right)J_0\\left(\\frac{1}{2}\\right)\n\\end{align*}$$</p>\n"
},
{
"answer_id": 2094647,
"score": 4,
"text": "<p>$$\nI=\\Im\\int_{0}^{\\pi/2}e^{i(1+\\cos^2(x))}=\\Im \\left(e^{i\\frac 32}\\int_{0}^{\\pi/2}e^{\\frac i2\\cos(2x)}\\right)=\\frac{1}{4}\\Im\\left( e^{i\\frac 32}\\int_{-\\pi}^{\\pi}e^{\\frac i2\\cos(x)}\\right)\\underbrace{=}_{(\\star)}\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\Im (e^{i\\frac 32})=\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\sin\\left(\\frac{3}{2}\\right)\n$$</p>\n\n<p>where $J_0(z)$ is a Bessel function of the first type and equation $(\\star)$ frollows <a href=\"https://en.wikipedia.org/wiki/Bessel_function#Bessel.27s_integrals\" rel=\"nofollow noreferrer\">from here</a></p>\n"
},
{
"answer_id": 2094759,
"score": 0,
"text": "<p>I took an approach (below) but then dropped the idea to continue since the other answers were so neat and beautiful to follow and mine was a lot of work, which I shall complete at a later point of time (I hope at least). I hence write what I have so far below</p>\n\n<p>\\begin{align}\n\\int_0^{\\pi/2}\\cos^{2n}(2x)dx&=\\int_0^{\\pi/2}\\cos^{2n}(x)dx\\\\\n&=\\frac12\\text{Beta}(\\frac12,n+\\frac12)\n\\end{align}\nhence \\begin{align}\n\\int_0^{\\pi/2}\\sin(1+\\cos^2x)\\mathrm{d}x &=\\int_0^{\\pi/2}\\sum_{m=0}^{\\infty}\\frac{(-1)^m(1+\\cos^2x)^{2m+1}}{(2m+1)!}\\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2}(1+\\cos^2x)^{2m+1}\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2} \\sum_{j=0}^{2m+1}{2k+1\\choose j} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\int_0^{\\pi/2} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\text{Beta}(\\frac12,j+\\frac12)\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\pi\\frac{(2j-1)!!}{2^jj!}\\\\\n&=\\frac12\\pi\\sum_{m=0}^{\\infty}\\sum_{j=0}^{2m+1}\\frac{(-1)^k}{(2m+1-j)!}\\frac{(2j-1)!!}{2^jj!^2}\\\\\n\\end{align}</p>\n"
},
{
"answer_id": 2094645,
"score": 10,
"text": "<p>Use the <a href=\"https://math.stackexchange.com/questions/1196401\">integral representations</a> of the <a href=\"http://en.wikipedia.org/wiki/Bessel_function\" rel=\"nofollow noreferrer\">Bessel function</a> of the first kind at $0$, </p>\n\n<p>$$\\int_{0}^{\\pi/2}\\cos\\left(z\\cos \\theta\\right)\\,\\mathrm{d}\\theta=\\frac{\\pi}{2}J_0\\left(z\\right).$$ </p>\n\n<p>So, </p>\n\n<p>$$\\begin{align*}\n\\int_0^{\\frac{\\pi}{2}}\\sin\\left(1+\\cos^2x\\right)\\mathrm{d}x&=\\int_0^{\\frac{\\pi}{2}}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)+\\frac{3}{2}\\right)\\mathrm{d}x\\\\\n&=\\sin\\left(\\frac{3}{2}\\right)\\int_{0}^{\\frac{\\pi}{2}}\\cos\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x+\\cos\\left(\\frac{3}{2}\\right)\\underset{0}{\\underbrace{\\int^{\\frac{\\pi}{2}}_{0}\\sin\\left(\\frac{1}{2}\\cos\\left(2x\\right)\\right)\\mathrm{d}x}}\\\\\n&=\\frac{\\pi}{2}\\sin\\left(\\frac{3}{2}\\right)J_0\\left(\\frac{1}{2}\\right)\n\\end{align*}$$</p>\n"
},
{
"answer_id": 2094647,
"score": 4,
"text": "<p>$$\nI=\\Im\\int_{0}^{\\pi/2}e^{i(1+\\cos^2(x))}=\\Im \\left(e^{i\\frac 32}\\int_{0}^{\\pi/2}e^{\\frac i2\\cos(2x)}\\right)=\\frac{1}{4}\\Im\\left( e^{i\\frac 32}\\int_{-\\pi}^{\\pi}e^{\\frac i2\\cos(x)}\\right)\\underbrace{=}_{(\\star)}\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\Im (e^{i\\frac 32})=\\frac{\\pi}{2}J_0\\left(\\frac{1}{2}\\right)\\sin\\left(\\frac{3}{2}\\right)\n$$</p>\n\n<p>where $J_0(z)$ is a Bessel function of the first type and equation $(\\star)$ frollows <a href=\"https://en.wikipedia.org/wiki/Bessel_function#Bessel.27s_integrals\" rel=\"nofollow noreferrer\">from here</a></p>\n"
},
{
"answer_id": 2094759,
"score": 0,
"text": "<p>I took an approach (below) but then dropped the idea to continue since the other answers were so neat and beautiful to follow and mine was a lot of work, which I shall complete at a later point of time (I hope at least). I hence write what I have so far below</p>\n\n<p>\\begin{align}\n\\int_0^{\\pi/2}\\cos^{2n}(2x)dx&=\\int_0^{\\pi/2}\\cos^{2n}(x)dx\\\\\n&=\\frac12\\text{Beta}(\\frac12,n+\\frac12)\n\\end{align}\nhence \\begin{align}\n\\int_0^{\\pi/2}\\sin(1+\\cos^2x)\\mathrm{d}x &=\\int_0^{\\pi/2}\\sum_{m=0}^{\\infty}\\frac{(-1)^m(1+\\cos^2x)^{2m+1}}{(2m+1)!}\\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2}(1+\\cos^2x)^{2m+1}\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^m}{(2m+1)!}\\int_0^{\\pi/2} \\sum_{j=0}^{2m+1}{2k+1\\choose j} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\int_0^{\\pi/2} \\cos^{2j}(x) \\mathrm{d}x\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\text{Beta}(\\frac12,j+\\frac12)\\\\\n&=\\sum_{m=0}^{\\infty}\\frac{(-1)^k}{(2m+1)!}\\sum_{j=0}^{2m+1}{2m+1\\choose j}\\frac12\\pi\\frac{(2j-1)!!}{2^jj!}\\\\\n&=\\frac12\\pi\\sum_{m=0}^{\\infty}\\sum_{j=0}^{2m+1}\\frac{(-1)^k}{(2m+1-j)!}\\frac{(2j-1)!!}{2^jj!^2}\\\\\n\\end{align}</p>\n"
}
] | Question: Compute the integral $\int_0^{\pi/2}\sin(1+\cos^2x)\,\mathrm{d}x$
Answer 1: <p>Use the <a href="https://math.stackexchange.com/questions/1196401">integral representation</a> of the <a href="http://en.wikipedia.org/wiki/Bessel_function" rel="nofollow noreferrer">Bessel function</a> of the first kind of order $0$,</p>
<p>$$\int_{0}^{\pi/2}\cos\left(z\cos \theta\right)\,\mathrm{d}\theta=\frac{\pi}{2}J_0\left(z\right).$$ </p>
<p>So, </p>
<p>$$\begin{align*}
\int_0^{\frac{\pi}{2}}\sin\left(1+\cos^2x\right)\mathrm{d}x&=\int_0^{\frac{\pi}{2}}\sin\left(\frac{1}{2}\cos\left(2x\right)+\frac{3}{2}\right)\mathrm{d}x\\
&=\sin\left(\frac{3}{2}\right)\int_{0}^{\frac{\pi}{2}}\cos\left(\frac{1}{2}\cos\left(2x\right)\right)\mathrm{d}x+\cos\left(\frac{3}{2}\right)\underset{0}{\underbrace{\int^{\frac{\pi}{2}}_{0}\sin\left(\frac{1}{2}\cos\left(2x\right)\right)\mathrm{d}x}}\\
&=\frac{\pi}{2}\sin\left(\frac{3}{2}\right)J_0\left(\frac{1}{2}\right)
\end{align*}$$</p>
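<p>A remark on the underbraced step, worth making explicit: that integral vanishes by the substitution $x\mapsto \pi/2-x$, which sends $\cos(2x)$ to $-\cos(2x)$ while $\sin$ is odd, so the integral equals its own negative:</p>
<p>$$\int_0^{\pi/2}\sin\left(\tfrac{1}{2}\cos 2x\right)\mathrm{d}x=\int_0^{\pi/2}\sin\left(-\tfrac{1}{2}\cos 2x\right)\mathrm{d}x=-\int_0^{\pi/2}\sin\left(\tfrac{1}{2}\cos 2x\right)\mathrm{d}x=0.$$</p>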
Answer 2: <p>$$
I=\Im\int_{0}^{\pi/2}e^{i(1+\cos^2(x))}\,\mathrm{d}x=\Im \left(e^{i\frac 32}\int_{0}^{\pi/2}e^{\frac i2\cos(2x)}\,\mathrm{d}x\right)=\frac{1}{4}\Im\left( e^{i\frac 32}\int_{-\pi}^{\pi}e^{\frac i2\cos(x)}\,\mathrm{d}x\right)\underbrace{=}_{(\star)}\frac{\pi}{2}J_0\left(\frac{1}{2}\right)\Im (e^{i\frac 32})=\frac{\pi}{2}J_0\left(\frac{1}{2}\right)\sin\left(\frac{3}{2}\right)
$$</p>
<p>where $J_0(z)$ is a Bessel function of the first kind and equation $(\star)$ follows <a href="https://en.wikipedia.org/wiki/Bessel_function#Bessel.27s_integrals" rel="nofollow noreferrer">from here</a>.</p>
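<p>For reference, the identity invoked at $(\star)$ is Bessel's integral in the form</p>
<p>$$J_0(z)=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{iz\cos\theta}\,\mathrm{d}\theta,$$</p>
<p>so $\int_{-\pi}^{\pi}e^{\frac i2\cos x}\,\mathrm{d}x=2\pi J_0\left(\frac12\right)$, and the prefactor $\frac14\cdot 2\pi=\frac{\pi}{2}$ produces the $\frac{\pi}{2}$ in the result.</p>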
Answer 3: <p>I took an approach (below) but then dropped the idea of continuing, since the other answers were so neat and beautiful to follow and mine was a lot of work, which I shall complete at a later point in time (I hope, at least). I hence write what I have so far below.</p>
<p>\begin{align}
\int_0^{\pi/2}\cos^{2n}(2x)\,\mathrm{d}x&=\int_0^{\pi/2}\cos^{2n}(x)\,\mathrm{d}x\\
&=\frac12\text{Beta}(\frac12,n+\frac12)
\end{align}
hence \begin{align}
\int_0^{\pi/2}\sin(1+\cos^2x)\mathrm{d}x &=\int_0^{\pi/2}\sum_{m=0}^{\infty}\frac{(-1)^m(1+\cos^2x)^{2m+1}}{(2m+1)!}\mathrm{d}x\\
&=\sum_{m=0}^{\infty}\frac{(-1)^m}{(2m+1)!}\int_0^{\pi/2}(1+\cos^2x)^{2m+1}\mathrm{d}x\\
&=\sum_{m=0}^{\infty}\frac{(-1)^m}{(2m+1)!}\int_0^{\pi/2} \sum_{j=0}^{2m+1}{2m+1\choose j} \cos^{2j}(x) \mathrm{d}x\\
&=\sum_{m=0}^{\infty}\frac{(-1)^m}{(2m+1)!}\sum_{j=0}^{2m+1}{2m+1\choose j}\int_0^{\pi/2} \cos^{2j}(x) \mathrm{d}x\\
&=\sum_{m=0}^{\infty}\frac{(-1)^m}{(2m+1)!}\sum_{j=0}^{2m+1}{2m+1\choose j}\frac12\text{Beta}(\frac12,j+\frac12)\\
&=\sum_{m=0}^{\infty}\frac{(-1)^m}{(2m+1)!}\sum_{j=0}^{2m+1}{2m+1\choose j}\frac12\pi\frac{(2j-1)!!}{2^jj!}\\
&=\frac12\pi\sum_{m=0}^{\infty}\sum_{j=0}^{2m+1}\frac{(-1)^m}{(2m+1-j)!}\frac{(2j-1)!!}{2^jj!^2}\\
\end{align}</p>
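<p>As a quick numerical sanity check of the closed form $\frac{\pi}{2}\sin\left(\frac{3}{2}\right)J_0\left(\frac{1}{2}\right)$ (a sketch only; it assumes SciPy is installed):</p>
<pre><code>import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Left-hand side: numerical quadrature of the original integral.
lhs, _ = quad(lambda x: np.sin(1.0 + np.cos(x) ** 2), 0.0, np.pi / 2)

# Right-hand side: the closed form derived in the answers above.
rhs = (np.pi / 2) * np.sin(1.5) * j0(0.5)

print(lhs, rhs)  # both come out to approximately 1.4704
</code></pre>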
|
Limit of $\sin 2^n$ | 2,094,732 | 5 | https://math.stackexchange.com/questions/2094732/limit-of-sin-2n | <p>I am trying to show that
$$\lim_{n\to \infty}\sin 2^n$$
does not exist; that is, the sequence $(\sin 2^n)_{n \in \mathbb N}$ diverges.</p>
<p>I could show that if the limit converges, say to $L \neq 0$, then</p>

<p>$$L=\lim_{n\to \infty}\sin 2^{n+1}$$
$$=\lim_{n\to \infty}2\sin 2^n\cos 2^n,$$
so $\lim_{n\to \infty}\cos 2^n$ exists and equals $\frac{L}{2L}=\frac{1}{2}$.</p>
<p>It cannot be
$$\lim_{n\to \infty}\cos 2^n=\frac{1}{2}$$
since it implies
$$\frac{1}{2}=\lim_{n\to \infty}\cos 2^{n+1}$$
$$=2(\lim_{n\to \infty}\cos 2^n)^2-1$$
$$=-\frac{1}{2}$$
So either $$\lim_{n\to \infty}\sin 2^n=0$$
or it diverges.</p>
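<p>A quick high-precision computation (a sketch; it assumes the <code>mpmath</code> library, since plain floating point loses $2^n \bmod 2\pi$ entirely for large $n$) shows the terms jumping around $[-1,1]$ with no visible trend, which is consistent with divergence but of course proves nothing:</p>
<pre><code>from mpmath import mp, mpf, sin

mp.dps = 60  # plenty of working precision for argument reduction mod 2*pi

# mpf(2) ** n is exact (a power of two), and mpmath's sin reduces
# large arguments accurately, so the printed values are trustworthy.
for n in [10, 20, 30, 40, 50, 60]:
    print(n, sin(mpf(2) ** n))
</code></pre>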
<p>For the sequence to converge, it must be that $2^n$ gets arbitrarily close to $m\pi$ for some integer $m=m(n)$ as $n$ goes to infinity, which seems counterintuitive. But I couldn't prove it, so does anyone have a good idea?</p>
| [
{
"answer_id": 2094755,
"score": 2,
"text": "<p>One can see with the double angle formula that</p>\n\n<p>$$\\sin2^{n+1}=2\\sin2^n\\cos2^n=\\pm2\\sin2^n\\sqrt{1-\\sin^22^n}\\approx\\pm2\\sin2^n$$</p>\n\n<p>So if $\\sin2^n$ gets close to $0$, $\\sin2^{n+1}$ will double and get farther from $0$. Indeed, for if the limit to exist, the double angle theorem says it must be $\\sqrt3/2$, which is a contradiction to your statement.</p>\n\n<p>$$L=2L\\sqrt{1-L^2}$$</p>\n\n<hr>\n\n<p>edit:</p>\n\n<p>To be more specific, if $0<|\\sin2^n|<\\frac12$, then it follows from the above that</p>\n\n<p>$$\\sqrt3|\\sin2^n|<|\\sin2^{n+1}|<2|\\sin2^n|$$</p>\n"
},
{
"answer_id": 2094781,
"score": 2,
"text": "<p>If, for some $a$, $$\n\\lim\\limits_{n \\to \\infty} \\sin 2^n =\\sin a\n$$ Introduce $$\nb := \\pi/2 -a\n$$ Suppose, with both $0 <|\\epsilon_1| <|\\epsilon$ and $0 <|\\epsilon_2| < |\\epsilon|$, $$\n2^n =\\pm a \\pm 2b +\\epsilon_1 \\quad (\\mathrm{mod}\\; 2\\pi) \\\\\n2^{n+1} =\\pm a \\pm 2b +\\epsilon_2 \\quad (\\mathrm{mod}\\; 2\\pi)\n$$ (They may be different numbers, and not necessarily same sign, so there are 4 signs.) \nThen by your construction, if we subtract them, either: $$\n2^n =\\pm 2a \\pm 2b +2|\\epsilon_3|\\quad (\\mathrm{mod}\\; 2\\pi) \\\\\n$$ where $0 <|\\epsilon_3| <|\\epsilon$. \nMake $\\epsilon$ so small that this is absurd, unless $a =0$ or $a =\\pi/2$. </p>\n\n<p>For $a=\\pi/2$, we know $2^n$ is $2k\\pi \\pm \\pi/2 +\\epsilon_4$. But then $2^{n+1} =4k\\pi +\\pi +2\\epsilon_4$ (or minor case, which is similar). Make $\\epsilon$ so small that this is absurd. </p>\n\n<p>It remains to deal with $a=0$. Here, similarly, $2^n =k\\pi +\\epsilon_5$. If $\\epsilon_5 \\neq 0$, then times 2 so many times, that $\\epsilon_5$ exceeds $\\epsilon$.</p>\n\n<p>That $2^n =k\\pi$, for some $k$, is outrageous.</p>\n\n<p>(A bit hard to write very clearly, but draw a graph and you will get it. If someone finds a better way to put it, edit by all means.)</p>\n"
},
{
"answer_id": 2094815,
"score": 1,
"text": "<p>$\\pi\\not\\in \\mathbb Q.$ Let $2^n=k_n\\pi +d_n$ where $k_n\\in \\mathbb Z$ and $0<|d_n|< \\pi /2.$</p>\n\n<p>Suppose $\\lim_{n\\to \\infty}\\sin 2^n=0.$ Then $\\lim_{n\\to \\infty} |d_n|=0.$</p>\n\n<p>If $n\\geq n_0\\implies |d_n|<\\pi /8,$ consider $j\\in \\mathbb N$ such that $2^j|d_{n_0}|<\\pi /2<2^{j+1}|d_{n_0}|.$ Then $$2^{j-1}|d_{n_0}|=(1/2)\\cdot 2^j|d_{n_0}|<\\pi /4$$ so we have$$|d_{n_0+j-1}|=2^{j-1}|d_{n_0}|.$$</p>\n\n<p>Then $$\\pi /8<(1/4)\\cdot 2^{j+1}|d_{n_0}|=2^{j-1}|d_{n_0}|=|d_{n_0+j-1}|<\\pi/8$$ a contradiction.</p>\n"
},
{
"answer_id": 2094842,
"score": 0,
"text": "<p><strong>Hint</strong>:</p>\n\n<p>$$2^n=2\\pi\\frac{2^n}{2\\pi}$$ so that the quadrant of the angle $2^n$ is given by the first two bits of the fractional part of $\\dfrac{2^n}{2\\pi}$, which are nothing but the $n^{th}$ and $n+1^{th}$ bits of the fractional part of $\\dfrac1{2\\pi}$, a transcendental number.</p>\n\n<p>If you can show that all bit pairs occur infinitely many times, you are done.</p>\n"
},
{
"answer_id": 2094815,
"score": 1,
"text": "<p>$\\pi\\not\\in \\mathbb Q.$ Let $2^n=k_n\\pi +d_n$ where $k_n\\in \\mathbb Z$ and $0<|d_n|< \\pi /2.$</p>\n\n<p>Suppose $\\lim_{n\\to \\infty}\\sin 2^n=0.$ Then $\\lim_{n\\to \\infty} |d_n|=0.$</p>\n\n<p>If $n\\geq n_0\\implies |d_n|<\\pi /8,$ consider $j\\in \\mathbb N$ such that $2^j|d_{n_0}|<\\pi /2<2^{j+1}|d_{n_0}|.$ Then $$2^{j-1}|d_{n_0}|=(1/2)\\cdot 2^j|d_{n_0}|<\\pi /4$$ so we have$$|d_{n_0+j-1}|=2^{j-1}|d_{n_0}|.$$</p>\n\n<p>Then $$\\pi /8<(1/4)\\cdot 2^{j+1}|d_{n_0}|=2^{j-1}|d_{n_0}|=|d_{n_0+j-1}|<\\pi/8$$ a contradiction.</p>\n"
},
{
"answer_id": 2094842,
"score": 0,
"text": "<p><strong>Hint</strong>:</p>\n\n<p>$$2^n=2\\pi\\frac{2^n}{2\\pi}$$ so that the quadrant of the angle $2^n$ is given by the first two bits of the fractional part of $\\dfrac{2^n}{2\\pi}$, which are nothing but the $n^{th}$ and $n+1^{th}$ bits of the fractional part of $\\dfrac1{2\\pi}$, a transcendental number.</p>\n\n<p>If you can show that all bit pairs occur infinitely many times, you are done.</p>\n"
}
] | Question: Limit of $\sin 2^n$
Answer 1: <p>One can see with the double angle formula that</p>
<p>$$\sin2^{n+1}=2\sin2^n\cos2^n=\pm2\sin2^n\sqrt{1-\sin^22^n}\approx\pm2\sin2^n$$</p>
<p>So if $\sin2^n$ gets close to $0$, $\sin2^{n+1}$ will roughly double and get farther from $0$. Indeed, if the limit $L$ were to exist, it would have to satisfy the fixed-point equation below, forcing $|L|=\sqrt3/2$ (the other solution, $L=0$, is ruled out by the doubling just described), contradicting your statement.</p>
<p>$$L=2L\sqrt{1-L^2}$$</p>
<hr>
<p>edit:</p>
<p>To be more specific, if $0<|\sin2^n|<\frac12$, then it follows from the above that</p>
<p>$$\sqrt3|\sin2^n|<|\sin2^{n+1}|<2|\sin2^n|$$</p>
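<p>(For completeness, a quick check of the fixed-point equation $L=2L\sqrt{1-L^2}$ above: taking absolute values in the double angle formula and letting $n\to\infty$ gives $$|L|=2|L|\sqrt{1-L^2},$$ so either $L=0$ or $\sqrt{1-L^2}=\tfrac12$, i.e. $|L|=\tfrac{\sqrt3}{2}$.)</p>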
Answer 2: <p>If, for some $a$, $$
\lim\limits_{n \to \infty} \sin 2^n =\sin a
$$ then introduce $$
b := \pi/2 -a
$$ Suppose, with both $0 <|\epsilon_1| <|\epsilon|$ and $0 <|\epsilon_2| < |\epsilon|$, $$
2^n =\pm a \pm 2b +\epsilon_1 \quad (\mathrm{mod}\; 2\pi) \\
2^{n+1} =\pm a \pm 2b +\epsilon_2 \quad (\mathrm{mod}\; 2\pi)
$$ (The signs may differ between the two lines, so there are four sign choices.) 
Then by your construction, subtracting them gives $$
2^n =\pm 2a \pm 2b +2|\epsilon_3|\quad (\mathrm{mod}\; 2\pi) \\
$$ where $0 <|\epsilon_3| <|\epsilon|$. 
Make $\epsilon$ so small that this is absurd, unless $a =0$ or $a =\pi/2$. </p>
<p>For $a=\pi/2$, we know $2^n$ is $2k\pi \pm \pi/2 +\epsilon_4$. But then $2^{n+1} =4k\pi +\pi +2\epsilon_4$ (or the minus case, which is similar). Make $\epsilon$ so small that this is absurd. </p>
<p>It remains to deal with $a=0$. Here, similarly, $2^n =k\pi +\epsilon_5$. If $\epsilon_5 \neq 0$, then doubling enough times makes the error term exceed $\epsilon$.</p>
<p>That $2^n =k\pi$, for some $k$, is impossible, since $\pi$ is irrational.</p>
<p>(A bit hard to write very clearly, but draw a graph and you will get it. If someone finds a better way to put it, edit by all means.)</p>
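<p>(An alternative way to locate the candidate limits, sketched with $\theta_n := 2^n \bmod 2\pi$: if $\sin\theta_n \to \sin a$, then $\theta_n$ approaches the set $S=\{a,\ \pi-a\} \pmod{2\pi}$, and since $\theta_{n+1}=2\theta_n$, doubling must map $S$ into itself modulo $2\pi$. Solving the four congruences $2a\equiv a$, $2a\equiv \pi-a$, $2(\pi-a)\equiv a$, $2(\pi-a)\equiv \pi-a \pmod{2\pi}$ leaves only $\sin a\in\{0,\pm\sqrt3/2\}$, matching the fixed-point values in Answer 1.)</p>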
Answer 3: <p>$\pi\not\in \mathbb Q.$ Let $2^n=k_n\pi +d_n$ where $k_n\in \mathbb Z$ and $0<|d_n|< \pi /2.$</p>
<p>Suppose $\lim_{n\to \infty}\sin 2^n=0.$ Then $\lim_{n\to \infty} |d_n|=0.$</p>
<p>If $|d_n|<\pi /8$ for all $n\geq n_0,$ consider $j\in \mathbb N$ such that $2^j|d_{n_0}|<\pi /2<2^{j+1}|d_{n_0}|.$ Then $$2^{j-1}|d_{n_0}|=(1/2)\cdot 2^j|d_{n_0}|<\pi /4$$ so we have $$|d_{n_0+j-1}|=2^{j-1}|d_{n_0}|.$$</p>
<p>Then $$\pi /8<(1/4)\cdot 2^{j+1}|d_{n_0}|=2^{j-1}|d_{n_0}|=|d_{n_0+j-1}|<\pi/8,$$ a contradiction.</p>
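<p>(The identity $|d_{n_0+j-1}|=2^{j-1}|d_{n_0}|$ rests on a one-line doubling check, spelled out for completeness: from $2^n=k_n\pi+d_n$ we get $2^{n+1}=2k_n\pi+2d_n$, so whenever $|2d_n|<\pi/2$ the representation is preserved with $k_{n+1}=2k_n$ and $d_{n+1}=2d_n$; iterating while $2^i|d_{n_0}|<\pi/4$ gives $d_{n_0+i}=2^{i}d_{n_0}$.)</p>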
Answer 4: <p><strong>Hint</strong>:</p>
<p>$$2^n=2\pi\frac{2^n}{2\pi}$$ so that the quadrant of the angle $2^n$ is given by the first two bits of the fractional part of $\dfrac{2^n}{2\pi}$, which are nothing but the $n^{th}$ and $(n+1)^{th}$ bits of the fractional part of $\dfrac1{2\pi}$, a transcendental number.</p>
<p>If you can show that all bit pairs occur infinitely many times, you are done.</p>
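<p>A small numerical illustration of this hint, as a sketch only: it assumes the third-party <code>mpmath</code> library, since ordinary double-precision floats lose all accuracy in $2^n \bmod 2\pi$ long before $n$ reaches $100$.</p>

```python
# Sketch: read off the quadrant of 2^n (mod 2*pi) from the first two bits of
# the fractional part of 2^n / (2*pi), and compare against sin(2^n).
from mpmath import mp, mpf, pi, sin, floor

mp.prec = 500  # working precision in bits; must comfortably exceed the largest n used

for n in range(1, 31):
    x = mpf(2) ** n / (2 * pi)
    frac = x - floor(x)              # fractional part of 2^n / (2*pi)
    quadrant = int(floor(4 * frac))  # 0..3, encoded by the first two bits
    print(n, quadrant, sin(mpf(2) ** n))
```

<p>In quadrants $0$ and $1$ the printed sine is positive, in quadrants $2$ and $3$ it is negative, so sign changes of $\sin 2^n$ show up directly in these bit pairs.</p>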
|
Deeper meaning of "canonical"? | 2,094,826 | 5 | https://math.stackexchange.com/questions/2094826/deeper-meaning-of-canonical | <p>For years I thought "canonical" for isomorphisms just means something like "obvious", "without arbitrarily choice", "simple" depending on the context, and I accepted the idea that there is no deeper meaning of "canonical" and no clear definition.</p>
<p>Reading about categories I noticed that "canonical" is sometimes used when there's a "natural equivalence" in the language of category theory. So my question: </p>
<p>Can we clearly define "canonical" by means of category theory in an abstract way, so that the situations in which we use "canonical" in our "real" mathematical lives are special cases of this abstract situation?</p>
| [
{
"answer_id": 2096437,
"score": 2,
"text": "<p>IMO, a \"canonical map\" is one <em>implied by context</em>.</p>\n\n<p>Language and notation is full of shorthand and abbreviations; it's far too cumbersome to say <em>everything</em>. For example, if we have a ring $R$ and an ideal $I$, the <em>quotient</em> is some ring $Q$ together with a surjective map $\\pi : R \\to Q$ whose kernel is $I$. </p>\n\n<p>However, the usual language is just say that $Q$ is the quotient, without any mention of $\\pi$. Then, if we later need $\\pi$, we name it as the \"canonical map\" $R \\to Q$.</p>\n\n<p>I'm not aware of a theory that tries to capture the notion — your best bet on that, I think, would be to go over to <a href=\"http://cstheory.stackexchange.com\">cstheory.stackexchange.com</a> and ask if there is a theory underpinning <em>implicit conversions</em>.</p>\n"
},
{
"answer_id": 2095583,
"score": 1,
"text": "<p>\"Natural\" in the context of category theory means natural transformation. A \"natural isomorphism\" is just an isomorphism between functors, i.e. mutually inverse natural transformations. Sometimes we say something like \"$\\text{Hom}(FA,B) \\cong \\text{Hom}(A,UB)$ is natural in $A$ and $B$\" by which we mean $(A,B)\\mapsto\\text{Hom}(FA,B)$ and $(A,B)\\mapsto\\text{Hom}(A,UB)$ are isomorphic as functors. While natural transformations are indeed often \"natural\" in the non-technical sense, there are a lot of them, and they can be quite arbitrary. For example, in the category of commutative monoids (or groups), $x \\mapsto 42x$ is a natural transformation.</p>\n\n<p>\"Canonical\" doesn't have a widely used technical meaning as far as I'm aware. (It does have <em>some</em> technical definitions, but the ones I'm aware of are not widely used.) The categorical notion that unambiguously comes closest to capturing the notion of a \"canonical map\" is the notion of a <a href=\"https://en.wikipedia.org/wiki/Universal_property\" rel=\"nofollow noreferrer\">universal arrow</a>. I'll use a variation on the presentation using a notion called a universal element. I describe the relation between these notions <a href=\"http://www.hedonisticlearning.com/posts/styles-of-category-theory.html\" rel=\"nofollow noreferrer\">here</a>. We say a functor $F$ is <strong>representable</strong> if $\\text{Hom}(X,-) \\cong F$ for a given object $X$. (Note, that in words this says $F$ is naturally isomorphic to a hom-functor.) We'll equivalently say that $X$ <strong>represents</strong> $F$. A <strong>universal element</strong> is simply the image of $id$ under this isomorphism, i.e. if $\\varphi:\\text{Hom}(X,-)\\cong F$, then $\\varphi(id)\\in F(X)$ is the universal element. In the (very common, in fact necessary) case that $F$ itself is some sort of hom-functor, then the universal element will be an arrow in some category and that arrow is the universal arrow. An excellent exercise is to show that if $X'$ also represents $F$ then $X \\cong X'$ and that isomorphism is unique.</p>\n\n<p>The two examples of \"canonical maps\" from Omnomnomnom's second <a href=\"https://en.wikipedia.org/wiki/Canonical_map\" rel=\"nofollow noreferrer\">link</a> are of this form. (Note, the other two examples are examples of \"structure maps\".) This may not be obvious, so let me spell it out.</p>\n\n<p>The first example is the quotient group construction. Let $N$ be a normal subgroup of a group $G$. Define $F(H)\\equiv\\{f\\in\\text{Hom}(G,H)\\mid \\forall x\\in N.f(x) = 0\\}$. Then it is the case that $\\text{Hom}(G/N,-)\\cong F$. (Prove this. Don't forget naturality.) The universal element is a group homomorphism $q\\in\\text{Hom}(G,G/N)=F(G/N)$ for which $q(x)=0$ iff $x\\in N$. In particular, any $f \\in F(H)$ factors through $q$ uniquely.</p>\n\n<p>The second example says there is a natural transformation $\\lambda : V \\to V^{**}$ natural in $V$ where $V$ is a vector space over the field $k$, and $V^* \\equiv V \\multimap k$ is the duality with $V\\multimap W$ being the vector space of linear functions from $V$ to $W$. We have the following adjunction $$\\text{Hom}(U,V\\multimap W)\\cong\\text{Hom}(V,U\\multimap W)$$ natural in $U$, $V$, and $W$. (Prove this.) Set $F_V(U)\\equiv\\text{Hom}(V,U\\multimap k)=\\text{Hom}(V,U^*)$. $\\text{Hom}(-,V^*)\\cong F_V$. (This looks slightly different from our definition of representability, but it's actually the same. Why?) 
The universal element is then a map $\\lambda_V \\in F_V(V^*) = \\text{Hom}(V,V^{**})$. These arrows indexed by $V$ form the components of a natural transformation $\\lambda : Id \\to (-)^{**}$. In fact, this result can be proved generally, and the general result is a corollary to a result called parameterized representability.</p>\n"
},
{
"answer_id": 2096437,
"score": 2,
"text": "<p>IMO, a \"canonical map\" is one <em>implied by context</em>.</p>\n\n<p>Language and notation is full of shorthand and abbreviations; it's far too cumbersome to say <em>everything</em>. For example, if we have a ring $R$ and an ideal $I$, the <em>quotient</em> is some ring $Q$ together with a surjective map $\\pi : R \\to Q$ whose kernel is $I$. </p>\n\n<p>However, the usual language is just say that $Q$ is the quotient, without any mention of $\\pi$. Then, if we later need $\\pi$, we name it as the \"canonical map\" $R \\to Q$.</p>\n\n<p>I'm not aware of a theory that tries to capture the notion — your best bet on that, I think, would be to go over to <a href=\"http://cstheory.stackexchange.com\">cstheory.stackexchange.com</a> and ask if there is a theory underpinning <em>implicit conversions</em>.</p>\n"
},
{
"answer_id": 2095583,
"score": 1,
"text": "<p>\"Natural\" in the context of category theory means natural transformation. A \"natural isomorphism\" is just an isomorphism between functors, i.e. mutually inverse natural transformations. Sometimes we say something like \"$\\text{Hom}(FA,B) \\cong \\text{Hom}(A,UB)$ is natural in $A$ and $B$\" by which we mean $(A,B)\\mapsto\\text{Hom}(FA,B)$ and $(A,B)\\mapsto\\text{Hom}(A,UB)$ are isomorphic as functors. While natural transformations are indeed often \"natural\" in the non-technical sense, there are a lot of them, and they can be quite arbitrary. For example, in the category of commutative monoids (or groups), $x \\mapsto 42x$ is a natural transformation.</p>\n\n<p>\"Canonical\" doesn't have a widely used technical meaning as far as I'm aware. (It does have <em>some</em> technical definitions, but the ones I'm aware of are not widely used.) The categorical notion that unambiguously comes closest to capturing the notion of a \"canonical map\" is the notion of a <a href=\"https://en.wikipedia.org/wiki/Universal_property\" rel=\"nofollow noreferrer\">universal arrow</a>. I'll use a variation on the presentation using a notion called a universal element. I describe the relation between these notions <a href=\"http://www.hedonisticlearning.com/posts/styles-of-category-theory.html\" rel=\"nofollow noreferrer\">here</a>. We say a functor $F$ is <strong>representable</strong> if $\\text{Hom}(X,-) \\cong F$ for a given object $X$. (Note, that in words this says $F$ is naturally isomorphic to a hom-functor.) We'll equivalently say that $X$ <strong>represents</strong> $F$. A <strong>universal element</strong> is simply the image of $id$ under this isomorphism, i.e. if $\\varphi:\\text{Hom}(X,-)\\cong F$, then $\\varphi(id)\\in F(X)$ is the universal element. In the (very common, in fact necessary) case that $F$ itself is some sort of hom-functor, then the universal element will be an arrow in some category and that arrow is the universal arrow. An excellent exercise is to show that if $X'$ also represents $F$ then $X \\cong X'$ and that isomorphism is unique.</p>\n\n<p>The two examples of \"canonical maps\" from Omnomnomnom's second <a href=\"https://en.wikipedia.org/wiki/Canonical_map\" rel=\"nofollow noreferrer\">link</a> are of this form. (Note, the other two examples are examples of \"structure maps\".) This may not be obvious, so let me spell it out.</p>\n\n<p>The first example is the quotient group construction. Let $N$ be a normal subgroup of a group $G$. Define $F(H)\\equiv\\{f\\in\\text{Hom}(G,H)\\mid \\forall x\\in N.f(x) = 0\\}$. Then it is the case that $\\text{Hom}(G/N,-)\\cong F$. (Prove this. Don't forget naturality.) The universal element is a group homomorphism $q\\in\\text{Hom}(G,G/N)=F(G/N)$ for which $q(x)=0$ iff $x\\in N$. In particular, any $f \\in F(H)$ factors through $q$ uniquely.</p>\n\n<p>The second example says there is a natural transformation $\\lambda : V \\to V^{**}$ natural in $V$ where $V$ is a vector space over the field $k$, and $V^* \\equiv V \\multimap k$ is the duality with $V\\multimap W$ being the vector space of linear functions from $V$ to $W$. We have the following adjunction $$\\text{Hom}(U,V\\multimap W)\\cong\\text{Hom}(V,U\\multimap W)$$ natural in $U$, $V$, and $W$. (Prove this.) Set $F_V(U)\\equiv\\text{Hom}(V,U\\multimap k)=\\text{Hom}(V,U^*)$. $\\text{Hom}(-,V^*)\\cong F_V$. (This looks slightly different from our definition of representability, but it's actually the same. Why?) 
The universal element is then a map $\\lambda_V \\in F_V(V^*) = \\text{Hom}(V,V^{**})$. These arrows indexed by $V$ form the components of a natural transformation $\\lambda : Id \\to (-)^{**}$. In fact, this result can be proved generally, and the general result is a corollary to a result called parameterized representability.</p>\n"
},
{
"answer_id": 2096437,
"score": 2,
"text": "<p>IMO, a \"canonical map\" is one <em>implied by context</em>.</p>\n\n<p>Language and notation is full of shorthand and abbreviations; it's far too cumbersome to say <em>everything</em>. For example, if we have a ring $R$ and an ideal $I$, the <em>quotient</em> is some ring $Q$ together with a surjective map $\\pi : R \\to Q$ whose kernel is $I$. </p>\n\n<p>However, the usual language is just say that $Q$ is the quotient, without any mention of $\\pi$. Then, if we later need $\\pi$, we name it as the \"canonical map\" $R \\to Q$.</p>\n\n<p>I'm not aware of a theory that tries to capture the notion — your best bet on that, I think, would be to go over to <a href=\"http://cstheory.stackexchange.com\">cstheory.stackexchange.com</a> and ask if there is a theory underpinning <em>implicit conversions</em>.</p>\n"
},
{
"answer_id": 2095583,
"score": 1,
"text": "<p>\"Natural\" in the context of category theory means natural transformation. A \"natural isomorphism\" is just an isomorphism between functors, i.e. mutually inverse natural transformations. Sometimes we say something like \"$\\text{Hom}(FA,B) \\cong \\text{Hom}(A,UB)$ is natural in $A$ and $B$\" by which we mean $(A,B)\\mapsto\\text{Hom}(FA,B)$ and $(A,B)\\mapsto\\text{Hom}(A,UB)$ are isomorphic as functors. While natural transformations are indeed often \"natural\" in the non-technical sense, there are a lot of them, and they can be quite arbitrary. For example, in the category of commutative monoids (or groups), $x \\mapsto 42x$ is a natural transformation.</p>\n\n<p>\"Canonical\" doesn't have a widely used technical meaning as far as I'm aware. (It does have <em>some</em> technical definitions, but the ones I'm aware of are not widely used.) The categorical notion that unambiguously comes closest to capturing the notion of a \"canonical map\" is the notion of a <a href=\"https://en.wikipedia.org/wiki/Universal_property\" rel=\"nofollow noreferrer\">universal arrow</a>. I'll use a variation on the presentation using a notion called a universal element. I describe the relation between these notions <a href=\"http://www.hedonisticlearning.com/posts/styles-of-category-theory.html\" rel=\"nofollow noreferrer\">here</a>. We say a functor $F$ is <strong>representable</strong> if $\\text{Hom}(X,-) \\cong F$ for a given object $X$. (Note, that in words this says $F$ is naturally isomorphic to a hom-functor.) We'll equivalently say that $X$ <strong>represents</strong> $F$. A <strong>universal element</strong> is simply the image of $id$ under this isomorphism, i.e. if $\\varphi:\\text{Hom}(X,-)\\cong F$, then $\\varphi(id)\\in F(X)$ is the universal element. In the (very common, in fact necessary) case that $F$ itself is some sort of hom-functor, then the universal element will be an arrow in some category and that arrow is the universal arrow. An excellent exercise is to show that if $X'$ also represents $F$ then $X \\cong X'$ and that isomorphism is unique.</p>\n\n<p>The two examples of \"canonical maps\" from Omnomnomnom's second <a href=\"https://en.wikipedia.org/wiki/Canonical_map\" rel=\"nofollow noreferrer\">link</a> are of this form. (Note, the other two examples are examples of \"structure maps\".) This may not be obvious, so let me spell it out.</p>\n\n<p>The first example is the quotient group construction. Let $N$ be a normal subgroup of a group $G$. Define $F(H)\\equiv\\{f\\in\\text{Hom}(G,H)\\mid \\forall x\\in N.f(x) = 0\\}$. Then it is the case that $\\text{Hom}(G/N,-)\\cong F$. (Prove this. Don't forget naturality.) The universal element is a group homomorphism $q\\in\\text{Hom}(G,G/N)=F(G/N)$ for which $q(x)=0$ iff $x\\in N$. In particular, any $f \\in F(H)$ factors through $q$ uniquely.</p>\n\n<p>The second example says there is a natural transformation $\\lambda : V \\to V^{**}$ natural in $V$ where $V$ is a vector space over the field $k$, and $V^* \\equiv V \\multimap k$ is the duality with $V\\multimap W$ being the vector space of linear functions from $V$ to $W$. We have the following adjunction $$\\text{Hom}(U,V\\multimap W)\\cong\\text{Hom}(V,U\\multimap W)$$ natural in $U$, $V$, and $W$. (Prove this.) Set $F_V(U)\\equiv\\text{Hom}(V,U\\multimap k)=\\text{Hom}(V,U^*)$. $\\text{Hom}(-,V^*)\\cong F_V$. (This looks slightly different from our definition of representability, but it's actually the same. Why?) 
The universal element is then a map $\\lambda_V \\in F_V(V^*) = \\text{Hom}(V,V^{**})$. These arrows indexed by $V$ form the components of a natural transformation $\\lambda : Id \\to (-)^{**}$. In fact, this result can be proved generally, and the general result is a corollary to a result called parameterized representability.</p>\n"
},
{
"answer_id": 2096437,
"score": 2,
"text": "<p>IMO, a \"canonical map\" is one <em>implied by context</em>.</p>\n\n<p>Language and notation is full of shorthand and abbreviations; it's far too cumbersome to say <em>everything</em>. For example, if we have a ring $R$ and an ideal $I$, the <em>quotient</em> is some ring $Q$ together with a surjective map $\\pi : R \\to Q$ whose kernel is $I$. </p>\n\n<p>However, the usual language is just say that $Q$ is the quotient, without any mention of $\\pi$. Then, if we later need $\\pi$, we name it as the \"canonical map\" $R \\to Q$.</p>\n\n<p>I'm not aware of a theory that tries to capture the notion — your best bet on that, I think, would be to go over to <a href=\"http://cstheory.stackexchange.com\">cstheory.stackexchange.com</a> and ask if there is a theory underpinning <em>implicit conversions</em>.</p>\n"
},
{
"answer_id": 2095583,
"score": 1,
"text": "<p>\"Natural\" in the context of category theory means natural transformation. A \"natural isomorphism\" is just an isomorphism between functors, i.e. mutually inverse natural transformations. Sometimes we say something like \"$\\text{Hom}(FA,B) \\cong \\text{Hom}(A,UB)$ is natural in $A$ and $B$\" by which we mean $(A,B)\\mapsto\\text{Hom}(FA,B)$ and $(A,B)\\mapsto\\text{Hom}(A,UB)$ are isomorphic as functors. While natural transformations are indeed often \"natural\" in the non-technical sense, there are a lot of them, and they can be quite arbitrary. For example, in the category of commutative monoids (or groups), $x \\mapsto 42x$ is a natural transformation.</p>\n\n<p>\"Canonical\" doesn't have a widely used technical meaning as far as I'm aware. (It does have <em>some</em> technical definitions, but the ones I'm aware of are not widely used.) The categorical notion that unambiguously comes closest to capturing the notion of a \"canonical map\" is the notion of a <a href=\"https://en.wikipedia.org/wiki/Universal_property\" rel=\"nofollow noreferrer\">universal arrow</a>. I'll use a variation on the presentation using a notion called a universal element. I describe the relation between these notions <a href=\"http://www.hedonisticlearning.com/posts/styles-of-category-theory.html\" rel=\"nofollow noreferrer\">here</a>. We say a functor $F$ is <strong>representable</strong> if $\\text{Hom}(X,-) \\cong F$ for a given object $X$. (Note, that in words this says $F$ is naturally isomorphic to a hom-functor.) We'll equivalently say that $X$ <strong>represents</strong> $F$. A <strong>universal element</strong> is simply the image of $id$ under this isomorphism, i.e. if $\\varphi:\\text{Hom}(X,-)\\cong F$, then $\\varphi(id)\\in F(X)$ is the universal element. In the (very common, in fact necessary) case that $F$ itself is some sort of hom-functor, then the universal element will be an arrow in some category and that arrow is the universal arrow. An excellent exercise is to show that if $X'$ also represents $F$ then $X \\cong X'$ and that isomorphism is unique.</p>\n\n<p>The two examples of \"canonical maps\" from Omnomnomnom's second <a href=\"https://en.wikipedia.org/wiki/Canonical_map\" rel=\"nofollow noreferrer\">link</a> are of this form. (Note, the other two examples are examples of \"structure maps\".) This may not be obvious, so let me spell it out.</p>\n\n<p>The first example is the quotient group construction. Let $N$ be a normal subgroup of a group $G$. Define $F(H)\\equiv\\{f\\in\\text{Hom}(G,H)\\mid \\forall x\\in N.f(x) = 0\\}$. Then it is the case that $\\text{Hom}(G/N,-)\\cong F$. (Prove this. Don't forget naturality.) The universal element is a group homomorphism $q\\in\\text{Hom}(G,G/N)=F(G/N)$ for which $q(x)=0$ iff $x\\in N$. In particular, any $f \\in F(H)$ factors through $q$ uniquely.</p>\n\n<p>The second example says there is a natural transformation $\\lambda : V \\to V^{**}$ natural in $V$ where $V$ is a vector space over the field $k$, and $V^* \\equiv V \\multimap k$ is the duality with $V\\multimap W$ being the vector space of linear functions from $V$ to $W$. We have the following adjunction $$\\text{Hom}(U,V\\multimap W)\\cong\\text{Hom}(V,U\\multimap W)$$ natural in $U$, $V$, and $W$. (Prove this.) Set $F_V(U)\\equiv\\text{Hom}(V,U\\multimap k)=\\text{Hom}(V,U^*)$. $\\text{Hom}(-,V^*)\\cong F_V$. (This looks slightly different from our definition of representability, but it's actually the same. Why?) 
The universal element is then a map $\\lambda_V \\in F_V(V^*) = \\text{Hom}(V,V^{**})$. These arrows indexed by $V$ form the components of a natural transformation $\\lambda : Id \\to (-)^{**}$. In fact, this result can be proved generally, and the general result is a corollary to a result called parameterized representability.</p>\n"
},
{
"answer_id": 2096437,
"score": 2,
"text": "<p>IMO, a \"canonical map\" is one <em>implied by context</em>.</p>\n\n<p>Language and notation is full of shorthand and abbreviations; it's far too cumbersome to say <em>everything</em>. For example, if we have a ring $R$ and an ideal $I$, the <em>quotient</em> is some ring $Q$ together with a surjective map $\\pi : R \\to Q$ whose kernel is $I$. </p>\n\n<p>However, the usual language is just say that $Q$ is the quotient, without any mention of $\\pi$. Then, if we later need $\\pi$, we name it as the \"canonical map\" $R \\to Q$.</p>\n\n<p>I'm not aware of a theory that tries to capture the notion — your best bet on that, I think, would be to go over to <a href=\"http://cstheory.stackexchange.com\">cstheory.stackexchange.com</a> and ask if there is a theory underpinning <em>implicit conversions</em>.</p>\n"
},
{
"answer_id": 2095583,
"score": 1,
"text": "<p>\"Natural\" in the context of category theory means natural transformation. A \"natural isomorphism\" is just an isomorphism between functors, i.e. mutually inverse natural transformations. Sometimes we say something like \"$\\text{Hom}(FA,B) \\cong \\text{Hom}(A,UB)$ is natural in $A$ and $B$\" by which we mean $(A,B)\\mapsto\\text{Hom}(FA,B)$ and $(A,B)\\mapsto\\text{Hom}(A,UB)$ are isomorphic as functors. While natural transformations are indeed often \"natural\" in the non-technical sense, there are a lot of them, and they can be quite arbitrary. For example, in the category of commutative monoids (or groups), $x \\mapsto 42x$ is a natural transformation.</p>\n\n<p>\"Canonical\" doesn't have a widely used technical meaning as far as I'm aware. (It does have <em>some</em> technical definitions, but the ones I'm aware of are not widely used.) The categorical notion that unambiguously comes closest to capturing the notion of a \"canonical map\" is the notion of a <a href=\"https://en.wikipedia.org/wiki/Universal_property\" rel=\"nofollow noreferrer\">universal arrow</a>. I'll use a variation on the presentation using a notion called a universal element. I describe the relation between these notions <a href=\"http://www.hedonisticlearning.com/posts/styles-of-category-theory.html\" rel=\"nofollow noreferrer\">here</a>. We say a functor $F$ is <strong>representable</strong> if $\\text{Hom}(X,-) \\cong F$ for a given object $X$. (Note, that in words this says $F$ is naturally isomorphic to a hom-functor.) We'll equivalently say that $X$ <strong>represents</strong> $F$. A <strong>universal element</strong> is simply the image of $id$ under this isomorphism, i.e. if $\\varphi:\\text{Hom}(X,-)\\cong F$, then $\\varphi(id)\\in F(X)$ is the universal element. In the (very common, in fact necessary) case that $F$ itself is some sort of hom-functor, then the universal element will be an arrow in some category and that arrow is the universal arrow. An excellent exercise is to show that if $X'$ also represents $F$ then $X \\cong X'$ and that isomorphism is unique.</p>\n\n<p>The two examples of \"canonical maps\" from Omnomnomnom's second <a href=\"https://en.wikipedia.org/wiki/Canonical_map\" rel=\"nofollow noreferrer\">link</a> are of this form. (Note, the other two examples are examples of \"structure maps\".) This may not be obvious, so let me spell it out.</p>\n\n<p>The first example is the quotient group construction. Let $N$ be a normal subgroup of a group $G$. Define $F(H)\\equiv\\{f\\in\\text{Hom}(G,H)\\mid \\forall x\\in N.f(x) = 0\\}$. Then it is the case that $\\text{Hom}(G/N,-)\\cong F$. (Prove this. Don't forget naturality.) The universal element is a group homomorphism $q\\in\\text{Hom}(G,G/N)=F(G/N)$ for which $q(x)=0$ iff $x\\in N$. In particular, any $f \\in F(H)$ factors through $q$ uniquely.</p>\n\n<p>The second example says there is a natural transformation $\\lambda : V \\to V^{**}$ natural in $V$ where $V$ is a vector space over the field $k$, and $V^* \\equiv V \\multimap k$ is the duality with $V\\multimap W$ being the vector space of linear functions from $V$ to $W$. We have the following adjunction $$\\text{Hom}(U,V\\multimap W)\\cong\\text{Hom}(V,U\\multimap W)$$ natural in $U$, $V$, and $W$. (Prove this.) Set $F_V(U)\\equiv\\text{Hom}(V,U\\multimap k)=\\text{Hom}(V,U^*)$. $\\text{Hom}(-,V^*)\\cong F_V$. (This looks slightly different from our definition of representability, but it's actually the same. Why?) 
The universal element is then a map $\\lambda_V \\in F_V(V^*) = \\text{Hom}(V,V^{**})$. These arrows indexed by $V$ form the components of a natural transformation $\\lambda : Id \\to (-)^{**}$. In fact, this result can be proved generally, and the general result is a corollary to a result called parameterized representability.</p>\n"
},
{
"answer_id": 2096437,
"score": 2,
"text": "<p>IMO, a \"canonical map\" is one <em>implied by context</em>.</p>\n\n<p>Language and notation is full of shorthand and abbreviations; it's far too cumbersome to say <em>everything</em>. For example, if we have a ring $R$ and an ideal $I$, the <em>quotient</em> is some ring $Q$ together with a surjective map $\\pi : R \\to Q$ whose kernel is $I$. </p>\n\n<p>However, the usual language is just say that $Q$ is the quotient, without any mention of $\\pi$. Then, if we later need $\\pi$, we name it as the \"canonical map\" $R \\to Q$.</p>\n\n<p>I'm not aware of a theory that tries to capture the notion — your best bet on that, I think, would be to go over to <a href=\"http://cstheory.stackexchange.com\">cstheory.stackexchange.com</a> and ask if there is a theory underpinning <em>implicit conversions</em>.</p>\n"
},
{
"answer_id": 2095583,
"score": 1,
"text": "<p>\"Natural\" in the context of category theory means natural transformation. A \"natural isomorphism\" is just an isomorphism between functors, i.e. mutually inverse natural transformations. Sometimes we say something like \"$\\text{Hom}(FA,B) \\cong \\text{Hom}(A,UB)$ is natural in $A$ and $B$\" by which we mean $(A,B)\\mapsto\\text{Hom}(FA,B)$ and $(A,B)\\mapsto\\text{Hom}(A,UB)$ are isomorphic as functors. While natural transformations are indeed often \"natural\" in the non-technical sense, there are a lot of them, and they can be quite arbitrary. For example, in the category of commutative monoids (or groups), $x \\mapsto 42x$ is a natural transformation.</p>\n\n<p>\"Canonical\" doesn't have a widely used technical meaning as far as I'm aware. (It does have <em>some</em> technical definitions, but the ones I'm aware of are not widely used.) The categorical notion that unambiguously comes closest to capturing the notion of a \"canonical map\" is the notion of a <a href=\"https://en.wikipedia.org/wiki/Universal_property\" rel=\"nofollow noreferrer\">universal arrow</a>. I'll use a variation on the presentation using a notion called a universal element. I describe the relation between these notions <a href=\"http://www.hedonisticlearning.com/posts/styles-of-category-theory.html\" rel=\"nofollow noreferrer\">here</a>. We say a functor $F$ is <strong>representable</strong> if $\\text{Hom}(X,-) \\cong F$ for a given object $X$. (Note, that in words this says $F$ is naturally isomorphic to a hom-functor.) We'll equivalently say that $X$ <strong>represents</strong> $F$. A <strong>universal element</strong> is simply the image of $id$ under this isomorphism, i.e. if $\\varphi:\\text{Hom}(X,-)\\cong F$, then $\\varphi(id)\\in F(X)$ is the universal element. In the (very common, in fact necessary) case that $F$ itself is some sort of hom-functor, then the universal element will be an arrow in some category and that arrow is the universal arrow. An excellent exercise is to show that if $X'$ also represents $F$ then $X \\cong X'$ and that isomorphism is unique.</p>\n\n<p>The two examples of \"canonical maps\" from Omnomnomnom's second <a href=\"https://en.wikipedia.org/wiki/Canonical_map\" rel=\"nofollow noreferrer\">link</a> are of this form. (Note, the other two examples are examples of \"structure maps\".) This may not be obvious, so let me spell it out.</p>\n\n<p>The first example is the quotient group construction. Let $N$ be a normal subgroup of a group $G$. Define $F(H)\\equiv\\{f\\in\\text{Hom}(G,H)\\mid \\forall x\\in N.f(x) = 0\\}$. Then it is the case that $\\text{Hom}(G/N,-)\\cong F$. (Prove this. Don't forget naturality.) The universal element is a group homomorphism $q\\in\\text{Hom}(G,G/N)=F(G/N)$ for which $q(x)=0$ iff $x\\in N$. In particular, any $f \\in F(H)$ factors through $q$ uniquely.</p>\n\n<p>The second example says there is a natural transformation $\\lambda : V \\to V^{**}$ natural in $V$ where $V$ is a vector space over the field $k$, and $V^* \\equiv V \\multimap k$ is the duality with $V\\multimap W$ being the vector space of linear functions from $V$ to $W$. We have the following adjunction $$\\text{Hom}(U,V\\multimap W)\\cong\\text{Hom}(V,U\\multimap W)$$ natural in $U$, $V$, and $W$. (Prove this.) Set $F_V(U)\\equiv\\text{Hom}(V,U\\multimap k)=\\text{Hom}(V,U^*)$. $\\text{Hom}(-,V^*)\\cong F_V$. (This looks slightly different from our definition of representability, but it's actually the same. Why?) 
The universal element is then a map $\\lambda_V \\in F_V(V^*) = \\text{Hom}(V,V^{**})$. These arrows indexed by $V$ form the components of a natural transformation $\\lambda : Id \\to (-)^{**}$. In fact, this result can be proved generally, and the general result is a corollary to a result called parameterized representability.</p>\n"
},
{
"answer_id": 2096437,
"score": 2,
"text": "<p>IMO, a \"canonical map\" is one <em>implied by context</em>.</p>\n\n<p>Language and notation is full of shorthand and abbreviations; it's far too cumbersome to say <em>everything</em>. For example, if we have a ring $R$ and an ideal $I$, the <em>quotient</em> is some ring $Q$ together with a surjective map $\\pi : R \\to Q$ whose kernel is $I$. </p>\n\n<p>However, the usual language is just say that $Q$ is the quotient, without any mention of $\\pi$. Then, if we later need $\\pi$, we name it as the \"canonical map\" $R \\to Q$.</p>\n\n<p>I'm not aware of a theory that tries to capture the notion — your best bet on that, I think, would be to go over to <a href=\"http://cstheory.stackexchange.com\">cstheory.stackexchange.com</a> and ask if there is a theory underpinning <em>implicit conversions</em>.</p>\n"
},
{
"answer_id": 2095583,
"score": 1,
"text": "<p>\"Natural\" in the context of category theory means natural transformation. A \"natural isomorphism\" is just an isomorphism between functors, i.e. mutually inverse natural transformations. Sometimes we say something like \"$\\text{Hom}(FA,B) \\cong \\text{Hom}(A,UB)$ is natural in $A$ and $B$\" by which we mean $(A,B)\\mapsto\\text{Hom}(FA,B)$ and $(A,B)\\mapsto\\text{Hom}(A,UB)$ are isomorphic as functors. While natural transformations are indeed often \"natural\" in the non-technical sense, there are a lot of them, and they can be quite arbitrary. For example, in the category of commutative monoids (or groups), $x \\mapsto 42x$ is a natural transformation.</p>\n\n<p>\"Canonical\" doesn't have a widely used technical meaning as far as I'm aware. (It does have <em>some</em> technical definitions, but the ones I'm aware of are not widely used.) The categorical notion that unambiguously comes closest to capturing the notion of a \"canonical map\" is the notion of a <a href=\"https://en.wikipedia.org/wiki/Universal_property\" rel=\"nofollow noreferrer\">universal arrow</a>. I'll use a variation on the presentation using a notion called a universal element. I describe the relation between these notions <a href=\"http://www.hedonisticlearning.com/posts/styles-of-category-theory.html\" rel=\"nofollow noreferrer\">here</a>. We say a functor $F$ is <strong>representable</strong> if $\\text{Hom}(X,-) \\cong F$ for a given object $X$. (Note, that in words this says $F$ is naturally isomorphic to a hom-functor.) We'll equivalently say that $X$ <strong>represents</strong> $F$. A <strong>universal element</strong> is simply the image of $id$ under this isomorphism, i.e. if $\\varphi:\\text{Hom}(X,-)\\cong F$, then $\\varphi(id)\\in F(X)$ is the universal element. In the (very common, in fact necessary) case that $F$ itself is some sort of hom-functor, then the universal element will be an arrow in some category and that arrow is the universal arrow. An excellent exercise is to show that if $X'$ also represents $F$ then $X \\cong X'$ and that isomorphism is unique.</p>\n\n<p>The two examples of \"canonical maps\" from Omnomnomnom's second <a href=\"https://en.wikipedia.org/wiki/Canonical_map\" rel=\"nofollow noreferrer\">link</a> are of this form. (Note, the other two examples are examples of \"structure maps\".) This may not be obvious, so let me spell it out.</p>\n\n<p>The first example is the quotient group construction. Let $N$ be a normal subgroup of a group $G$. Define $F(H)\\equiv\\{f\\in\\text{Hom}(G,H)\\mid \\forall x\\in N.f(x) = 0\\}$. Then it is the case that $\\text{Hom}(G/N,-)\\cong F$. (Prove this. Don't forget naturality.) The universal element is a group homomorphism $q\\in\\text{Hom}(G,G/N)=F(G/N)$ for which $q(x)=0$ iff $x\\in N$. In particular, any $f \\in F(H)$ factors through $q$ uniquely.</p>\n\n<p>The second example says there is a natural transformation $\\lambda : V \\to V^{**}$ natural in $V$ where $V$ is a vector space over the field $k$, and $V^* \\equiv V \\multimap k$ is the duality with $V\\multimap W$ being the vector space of linear functions from $V$ to $W$. We have the following adjunction $$\\text{Hom}(U,V\\multimap W)\\cong\\text{Hom}(V,U\\multimap W)$$ natural in $U$, $V$, and $W$. (Prove this.) Set $F_V(U)\\equiv\\text{Hom}(V,U\\multimap k)=\\text{Hom}(V,U^*)$. $\\text{Hom}(-,V^*)\\cong F_V$. (This looks slightly different from our definition of representability, but it's actually the same. Why?) 
The universal element is then a map $\\lambda_V \\in F_V(V^*) = \\text{Hom}(V,V^{**})$. These arrows indexed by $V$ form the components of a natural transformation $\\lambda : Id \\to (-)^{**}$. In fact, this result can be proved generally, and the general result is a corollary to a result called parameterized representability.</p>\n"
}
] | Question: Deeper meaning of "canonical"?
Answer 1: <p>IMO, a "canonical map" is one <em>implied by context</em>.</p>
<p>Language and notation are full of shorthand and abbreviations; it's far too cumbersome to say <em>everything</em>. For example, if we have a ring $R$ and an ideal $I$, the <em>quotient</em> is some ring $Q$ together with a surjective map $\pi : R \to Q$ whose kernel is $I$. </p>
<p>However, the usual language is simply to say that $Q$ is the quotient, without any mention of $\pi$. Then, if we later need $\pi$, we name it as the "canonical map" $R \to Q$.</p>
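<p>(A minimal sketch making the quotient example concrete; the names $S$ and $\bar f$ are only illustrative. The canonical map here is the projection $$\pi \colon R \to R/I, \qquad \pi(r) = r + I,$$ which is surjective with $\ker \pi = I$, and any ring homomorphism $f \colon R \to S$ satisfying $I \subseteq \ker f$ factors uniquely as $f = \bar f \circ \pi$ with $\bar f(r + I) = f(r)$.)</p>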
<p>I'm not aware of a theory that tries to capture the notion — your best bet on that, I think, would be to go over to <a href="http://cstheory.stackexchange.com">cstheory.stackexchange.com</a> and ask if there is a theory underpinning <em>implicit conversions</em>.</p>
Answer 2: <p>"Natural" in the context of category theory means natural transformation. A "natural isomorphism" is just an isomorphism between functors, i.e. mutually inverse natural transformations. Sometimes we say something like "$\text{Hom}(FA,B) \cong \text{Hom}(A,UB)$ is natural in $A$ and $B$" by which we mean $(A,B)\mapsto\text{Hom}(FA,B)$ and $(A,B)\mapsto\text{Hom}(A,UB)$ are isomorphic as functors. While natural transformations are indeed often "natural" in the non-technical sense, there are a lot of them, and they can be quite arbitrary. For example, in the category of commutative monoids (or groups), $x \mapsto 42x$ is a natural transformation.</p>
<p>"Canonical" doesn't have a widely used technical meaning as far as I'm aware. (It does have <em>some</em> technical definitions, but the ones I'm aware of are not widely used.) The categorical notion that unambiguously comes closest to capturing the notion of a "canonical map" is the notion of a <a href="https://en.wikipedia.org/wiki/Universal_property" rel="nofollow noreferrer">universal arrow</a>. I'll use a variation on the presentation using a notion called a universal element. I describe the relation between these notions <a href="http://www.hedonisticlearning.com/posts/styles-of-category-theory.html" rel="nofollow noreferrer">here</a>. We say a functor $F$ is <strong>representable</strong> if $\text{Hom}(X,-) \cong F$ for a given object $X$. (Note, that in words this says $F$ is naturally isomorphic to a hom-functor.) We'll equivalently say that $X$ <strong>represents</strong> $F$. A <strong>universal element</strong> is simply the image of $id$ under this isomorphism, i.e. if $\varphi:\text{Hom}(X,-)\cong F$, then $\varphi(id)\in F(X)$ is the universal element. In the (very common, in fact necessary) case that $F$ itself is some sort of hom-functor, then the universal element will be an arrow in some category and that arrow is the universal arrow. An excellent exercise is to show that if $X'$ also represents $F$ then $X \cong X'$ and that isomorphism is unique.</p>
<p>The two examples of "canonical maps" from Omnomnomnom's second <a href="https://en.wikipedia.org/wiki/Canonical_map" rel="nofollow noreferrer">link</a> are of this form. (Note, the other two examples are examples of "structure maps".) This may not be obvious, so let me spell it out.</p>
<p>The first example is the quotient group construction. Let $N$ be a normal subgroup of a group $G$. Define $F(H)\equiv\{f\in\text{Hom}(G,H)\mid \forall x\in N.f(x) = 0\}$. Then it is the case that $\text{Hom}(G/N,-)\cong F$. (Prove this. Don't forget naturality.) The universal element is a group homomorphism $q\in\text{Hom}(G,G/N)=F(G/N)$ for which $q(x)=0$ iff $x\in N$. In particular, any $f \in F(H)$ factors through $q$ uniquely.</p>
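<p>(A quick sketch of the factorization claim just made, with $\bar f$ an illustrative name: given $f \in F(H)$, define $\bar f \colon G/N \to H$ by $$\bar f(gN) = f(g).$$ This is well defined because $gN = g'N$ forces $g^{-1}g' \in N$, which $f$ kills, so $f(g) = f(g')$; and $\bar f$ is the unique homomorphism with $\bar f \circ q = f$ since $q$ is surjective.)</p>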
<p>The second example says there is a natural transformation $\lambda : V \to V^{**}$ natural in $V$ where $V$ is a vector space over the field $k$, and $V^* \equiv V \multimap k$ is the duality with $V\multimap W$ being the vector space of linear functions from $V$ to $W$. We have the following adjunction $$\text{Hom}(U,V\multimap W)\cong\text{Hom}(V,U\multimap W)$$ natural in $U$, $V$, and $W$. (Prove this.) Set $F_V(U)\equiv\text{Hom}(V,U\multimap k)=\text{Hom}(V,U^*)$. Then $\text{Hom}(-,V^*)\cong F_V$. (This looks slightly different from our definition of representability, but it's actually the same. Why?) The universal element is then a map $\lambda_V \in F_V(V^*) = \text{Hom}(V,V^{**})$. These arrows indexed by $V$ form the components of a natural transformation $\lambda : Id \to (-)^{**}$. In fact, this result can be proved generally, and the general result is a corollary to a result called parameterized representability.</p>
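<p>(Unwinding the abstraction, a sketch of what this universal element looks like explicitly: it is the evaluation map $$\lambda_V \colon V \to V^{**}, \qquad \lambda_V(v)(f) = f(v) \quad \text{for } f \in V^*,$$ and naturality amounts to the identity $T^{**} \circ \lambda_V = \lambda_W \circ T$ for every linear map $T \colon V \to W$, which is a direct check from the definitions.)</p>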
|
For $x+y=1$, show that $x^4+y^4\ge \frac{1}{8}$ | 2,095,390 | 5 | https://math.stackexchange.com/questions/2095390/for-xy-1-show-that-x4y4-ge-frac18 | <p>As in the title. Let $x,y$ be two real numbers such that $x+y=1$. Prove that $x^4+y^4\ge \frac{1}{8}$.</p>
<p>Any hints? Basically, the only method I am aware of is plugging $y=1-x$ into the inequality and investigating the extrema of the function, but I don't think it's the best method. I'm looking for a <em>cleverer</em> way to prove that inequality.</p>
| [
{
"answer_id": 2095403,
"score": 6,
"text": "<p>It is equivalent to showing that $(1/2+\\epsilon)^4+(1/2-\\epsilon)^4\\geq 1/8$. But this simplifies to $\\tfrac{1}{8}(16\\epsilon^4+12\\epsilon^2+1)$, which obviously is minimized at $\\epsilon=0$, giving $\\tfrac{1}{8}$.</p>\n\n<p>Edit: This is motivated by <em>guessing</em> the minimum, i.e. $x=y=1/2$, and then symmetrizing. The symmetry allows a nice expansion using the binomial theorem.</p>\n"
},
{
"answer_id": 2095402,
"score": 4,
"text": "<p>Also a Cauchy-Schwarz approach works:\n\\begin{align*}\n1 = x + y \\le \\sqrt{(x^2+y^2)(1^2+1^2)} \\implies x^2 + y^2 \\ge \\frac{1}{2} \\\\\n\\frac{1}{2} \\le x^2 + y^2 \\le \\sqrt{(x^4 + y^4)(1^2 + 1^2)} \\implies x^4 + y^4 \\ge \\frac{1}{8}\n\\end{align*}</p>\n"
},
{
"answer_id": 2095398,
"score": 3,
"text": "<p>If $a,b\\in \\mathbb{R}$, then $(a+b)^2\\leq 2(a^2+b^2)$ since $2ab\\leq a^2+b^2$.</p>\n\n<p>Applying this with $x$ and $y$ we get\n$$ 1=(x+y)^2\\leq 2(x^2+y^2)$$\nso $x^2+y^2\\geq \\frac{1}{2}$, and then applying the inequality again with $x^2$ and $y^2$ we get\n$$ \\frac{1}{4}\\leq (x^2+y^2)^2\\leq 2(x^4+y^4)$$\nwhich is the desired result.</p>\n"
},
{
"answer_id": 2095400,
"score": 3,
"text": "<p>$x^4$ and $y^4$ are convex functions of $x$ and $y$, so their sum is also convex. The restriction of this to the line $x+y=1$ is again convex. By symmetry, the minimum must occur at the point where $x=y$.</p>\n"
},
{
"answer_id": 2095399,
"score": 2,
"text": "<p><strong>hint</strong>: I posted a solution to this same question a year ago, and I think you can search it here. You can use the inequality $\\dfrac{a^2+b^2}{2} \\ge \\left(\\dfrac{a+b}{2}\\right)^2$ twice. </p>\n"
},
{
"answer_id": 2095466,
"score": 2,
"text": "<p>If you are looking for a more geometrical explanation, consider the family of curves $$x^4+y^4=k$$</p>\n\n<p>These are a family of concentric rounded square shaped curves with order 4 symmetry centred at the origin. The line $y=x$ is a line of symmetry. We need to find the value of $k$ for which such a curve is tangent to the line $x+y=1$.</p>\n\n<p>By symmetry $x=y=\\frac 12$ and hence $$k=\\left(\\frac 12\\right)^2+\\left(\\frac 12\\right)^2=\\frac 18$$</p>\n"
},
{
"answer_id": 2095408,
"score": 1,
"text": "<p>$$(x+y)^4=1 \\\\\nx^4+4x^3y+6x^2y^2+4x^3+y^4 =1$$</p>\n\n<p>Now, by AM-GM we have\n$$x^3y \\leq \\frac{x^4+x^4+x^4+y^4}{4}\\\\\nx^2y^2 \\leq \\frac{x^4+y^4}{2}\\\\\nxy^3 \\leq \\frac{x^4+y^4+y^4+y^4}{4}\\\\\n$$</p>\n\n<p>Thus\n$$1=x^4+4x^3y+6x^2y^2+4x^3+y^4 \\leq x^3+(3x^4+y^4)+3(x^4+y^4)+(x^4+3y^4)+y^4$$</p>\n"
},
{
"answer_id": 2095535,
"score": 1,
"text": "<p>If $x\\le0$, then $y\\ge1$, so $x^4+y^4\\ge1$. Hence we can assume $x,y>0$.</p>\n\n<p>For $x,y>0$, $x\\ne y$, the function\n$$\n\\mu(x,y;t)=\\begin{cases}\n\\left(\\dfrac{x^t+y^t}{2}\\right)^{1/t} & \\text{if $t\\ne0$} \\\\[6px]\n\\sqrt{xy\\vphantom{X}} & \\text{if $t=0$}\n\\end{cases}\n$$\nis continuous and increasing in the variable $t$. Thus\n$$\n\\mu(x,y;1)<\\mu(x,y;4)\n$$\nthat is\n$$\n\\frac{x+y}{2}<\\left(\\dfrac{x^4+y^4}{2}\\right)^{1/4}\n$$\nand, if $x+y=1$,\n$$\n\\frac{1}{16}<\\frac{x^4+y^4}{2}\n$$</p>\n\n<p>For $x=y$, the inequality is obvious (and is an equality, actually).</p>\n\n<hr>\n\n<p>The proof that $\\mu(x,y;t)$ is increasing (for $x\\ne y$) is an application of convexity. Suppose $0<p<q$; we want to prove that\n$$\n\\left(\\dfrac{x^p+y^p}{2}\\right)^{1/p}<\n\\left(\\dfrac{x^q+y^q}{2}\\right)^{1/q}\n$$\nthat is,\n$$\n\\left(\\dfrac{x^p+y^p}{2}\\right)^{q/p}<\\dfrac{x^q+y^q}{2}\n$$\nSet $u=x^p$ and $v=y^p$; then the inequality becomes\n$$\n\\left(\\dfrac{u+v}{2}\\right)^{q/p}<\\dfrac{u^{q/p}+v^{q/p}}{2}\n$$\nwhich is a consequence of $z\\mapsto z^{p/q}$ being convex (on $(0,\\infty)$).</p>\n\n<p>Since it's immediate that $\\mu(x,y;t)$ is continuous at $0$, we have that it is increasing over $[0,\\infty)$.</p>\n\n<p>Now notice that\n$$\n\\mu(x,y;-t)=\\mu(x^{-1},y^{-1};t)^{-1}\n$$\nand we can conclude that the function is also increasing over $(-\\infty,0]$.</p>\n\n<p>This is a “generalized AM-GM” inequality, which is the simple observation that $\\mu(x,y;0)<\\mu(x,y;1)$. For $t=-1$ we get the harmonic mean.</p>\n\n<p>The inequalities become nonstrict if and only if $x=y$.</p>\n\n<p>More generally, then, for $x+y=1$, $x,y>0$ and $t>1$, we have $\\mu(x,y;1)\\le\\mu(x,y;t)$, that is\n$$\nx^t+y^t\\ge\\frac{1}{2^{t-1}}\n$$</p>\n\n<p>If instead $0<t<1$, we have $\\mu(x,y;t)\\le\\mu(x,y;1)$, so\n$$\nx^t+y^t\\le 2^{1-t}\n$$</p>\n"
},
{
"answer_id": 2095629,
"score": 1,
"text": "<p>use Holder inequality we have\n$$(x^4+y^4)(1+1)(1+1)(1+1)\\ge (x+y)^4$$</p>\n"
},
{
"answer_id": 2095403,
"score": 6,
"text": "<p>It is equivalent to showing that $(1/2+\\epsilon)^4+(1/2-\\epsilon)^4\\geq 1/8$. But this simplifies to $\\tfrac{1}{8}(16\\epsilon^4+12\\epsilon^2+1)$, which obviously is minimized at $\\epsilon=0$, giving $\\tfrac{1}{8}$.</p>\n\n<p>Edit: This is motivated by <em>guessing</em> the minimum, i.e. $x=y=1/2$, and then symmetrizing. The symmetry allows a nice expansion using the binomial theorem.</p>\n"
},
{
"answer_id": 2095402,
"score": 4,
"text": "<p>Also a Cauchy-Schwarz approach works:\n\\begin{align*}\n1 = x + y \\le \\sqrt{(x^2+y^2)(1^2+1^2)} \\implies x^2 + y^2 \\ge \\frac{1}{2} \\\\\n\\frac{1}{2} \\le x^2 + y^2 \\le \\sqrt{(x^4 + y^4)(1^2 + 1^2)} \\implies x^4 + y^4 \\ge \\frac{1}{8}\n\\end{align*}</p>\n"
},
{
"answer_id": 2095398,
"score": 3,
"text": "<p>If $a,b\\in \\mathbb{R}$, then $(a+b)^2\\leq 2(a^2+b^2)$ since $2ab\\leq a^2+b^2$.</p>\n\n<p>Applying this with $x$ and $y$ we get\n$$ 1=(x+y)^2\\leq 2(x^2+y^2)$$\nso $x^2+y^2\\geq \\frac{1}{2}$, and then applying the inequality again with $x^2$ and $y^2$ we get\n$$ \\frac{1}{4}\\leq (x^2+y^2)^2\\leq 2(x^4+y^4)$$\nwhich is the desired result.</p>\n"
},
{
"answer_id": 2095400,
"score": 3,
"text": "<p>$x^4$ and $y^4$ are convex functions of $x$ and $y$, so their sum is also convex. The restriction of this to the line $x+y=1$ is again convex. By symmetry, the minimum must occur at the point where $x=y$.</p>\n"
},
{
"answer_id": 2095399,
"score": 2,
"text": "<p><strong>hint</strong>: I posted a solution to this same question a year ago, and I think you can search it here. You can use the inequality $\\dfrac{a^2+b^2}{2} \\ge \\left(\\dfrac{a+b}{2}\\right)^2$ twice. </p>\n"
},
{
"answer_id": 2095466,
"score": 2,
"text": "<p>If you are looking for a more geometrical explanation, consider the family of curves $$x^4+y^4=k$$</p>\n\n<p>These are a family of concentric rounded square shaped curves with order 4 symmetry centred at the origin. The line $y=x$ is a line of symmetry. We need to find the value of $k$ for which such a curve is tangent to the line $x+y=1$.</p>\n\n<p>By symmetry $x=y=\\frac 12$ and hence $$k=\\left(\\frac 12\\right)^2+\\left(\\frac 12\\right)^2=\\frac 18$$</p>\n"
},
{
"answer_id": 2095408,
"score": 1,
"text": "<p>$$(x+y)^4=1 \\\\\nx^4+4x^3y+6x^2y^2+4x^3+y^4 =1$$</p>\n\n<p>Now, by AM-GM we have\n$$x^3y \\leq \\frac{x^4+x^4+x^4+y^4}{4}\\\\\nx^2y^2 \\leq \\frac{x^4+y^4}{2}\\\\\nxy^3 \\leq \\frac{x^4+y^4+y^4+y^4}{4}\\\\\n$$</p>\n\n<p>Thus\n$$1=x^4+4x^3y+6x^2y^2+4x^3+y^4 \\leq x^3+(3x^4+y^4)+3(x^4+y^4)+(x^4+3y^4)+y^4$$</p>\n"
},
{
"answer_id": 2095535,
"score": 1,
"text": "<p>If $x\\le0$, then $y\\ge1$, so $x^4+y^4\\ge1$. Hence we can assume $x,y>0$.</p>\n\n<p>For $x,y>0$, $x\\ne y$, the function\n$$\n\\mu(x,y;t)=\\begin{cases}\n\\left(\\dfrac{x^t+y^t}{2}\\right)^{1/t} & \\text{if $t\\ne0$} \\\\[6px]\n\\sqrt{xy\\vphantom{X}} & \\text{if $t=0$}\n\\end{cases}\n$$\nis continuous and increasing in the variable $t$. Thus\n$$\n\\mu(x,y;1)<\\mu(x,y;4)\n$$\nthat is\n$$\n\\frac{x+y}{2}<\\left(\\dfrac{x^4+y^4}{2}\\right)^{1/4}\n$$\nand, if $x+y=1$,\n$$\n\\frac{1}{16}<\\frac{x^4+y^4}{2}\n$$</p>\n\n<p>For $x=y$, the inequality is obvious (and is an equality, actually).</p>\n\n<hr>\n\n<p>The proof that $\\mu(x,y;t)$ is increasing (for $x\\ne y$) is an application of convexity. Suppose $0<p<q$; we want to prove that\n$$\n\\left(\\dfrac{x^p+y^p}{2}\\right)^{1/p}<\n\\left(\\dfrac{x^q+y^q}{2}\\right)^{1/q}\n$$\nthat is,\n$$\n\\left(\\dfrac{x^p+y^p}{2}\\right)^{q/p}<\\dfrac{x^q+y^q}{2}\n$$\nSet $u=x^p$ and $v=y^p$; then the inequality becomes\n$$\n\\left(\\dfrac{u+v}{2}\\right)^{q/p}<\\dfrac{u^{q/p}+v^{q/p}}{2}\n$$\nwhich is a consequence of $z\\mapsto z^{p/q}$ being convex (on $(0,\\infty)$).</p>\n\n<p>Since it's immediate that $\\mu(x,y;t)$ is continuous at $0$, we have that it is increasing over $[0,\\infty)$.</p>\n\n<p>Now notice that\n$$\n\\mu(x,y;-t)=\\mu(x^{-1},y^{-1};t)^{-1}\n$$\nand we can conclude that the function is also increasing over $(-\\infty,0]$.</p>\n\n<p>This is a “generalized AM-GM” inequality, which is the simple observation that $\\mu(x,y;0)<\\mu(x,y;1)$. For $t=-1$ we get the harmonic mean.</p>\n\n<p>The inequalities become nonstrict if and only if $x=y$.</p>\n\n<p>More generally, then, for $x+y=1$, $x,y>0$ and $t>1$, we have $\\mu(x,y;1)\\le\\mu(x,y;t)$, that is\n$$\nx^t+y^t\\ge\\frac{1}{2^{t-1}}\n$$</p>\n\n<p>If instead $0<t<1$, we have $\\mu(x,y;t)\\le\\mu(x,y;1)$, so\n$$\nx^t+y^t\\le 2^{1-t}\n$$</p>\n"
},
{
"answer_id": 2095629,
"score": 1,
"text": "<p>use Holder inequality we have\n$$(x^4+y^4)(1+1)(1+1)(1+1)\\ge (x+y)^4$$</p>\n"
},
{
"answer_id": 2095403,
"score": 6,
"text": "<p>It is equivalent to showing that $(1/2+\\epsilon)^4+(1/2-\\epsilon)^4\\geq 1/8$. But this simplifies to $\\tfrac{1}{8}(16\\epsilon^4+12\\epsilon^2+1)$, which obviously is minimized at $\\epsilon=0$, giving $\\tfrac{1}{8}$.</p>\n\n<p>Edit: This is motivated by <em>guessing</em> the minimum, i.e. $x=y=1/2$, and then symmetrizing. The symmetry allows a nice expansion using the binomial theorem.</p>\n"
},
{
"answer_id": 2095402,
"score": 4,
"text": "<p>Also a Cauchy-Schwarz approach works:\n\\begin{align*}\n1 = x + y \\le \\sqrt{(x^2+y^2)(1^2+1^2)} \\implies x^2 + y^2 \\ge \\frac{1}{2} \\\\\n\\frac{1}{2} \\le x^2 + y^2 \\le \\sqrt{(x^4 + y^4)(1^2 + 1^2)} \\implies x^4 + y^4 \\ge \\frac{1}{8}\n\\end{align*}</p>\n"
},
{
"answer_id": 2095398,
"score": 3,
"text": "<p>If $a,b\\in \\mathbb{R}$, then $(a+b)^2\\leq 2(a^2+b^2)$ since $2ab\\leq a^2+b^2$.</p>\n\n<p>Applying this with $x$ and $y$ we get\n$$ 1=(x+y)^2\\leq 2(x^2+y^2)$$\nso $x^2+y^2\\geq \\frac{1}{2}$, and then applying the inequality again with $x^2$ and $y^2$ we get\n$$ \\frac{1}{4}\\leq (x^2+y^2)^2\\leq 2(x^4+y^4)$$\nwhich is the desired result.</p>\n"
},
{
"answer_id": 2095400,
"score": 3,
"text": "<p>$x^4$ and $y^4$ are convex functions of $x$ and $y$, so their sum is also convex. The restriction of this to the line $x+y=1$ is again convex. By symmetry, the minimum must occur at the point where $x=y$.</p>\n"
},
{
"answer_id": 2095399,
"score": 2,
"text": "<p><strong>hint</strong>: I posted a solution to this same question a year ago, and I think you can search it here. You can use the inequality $\\dfrac{a^2+b^2}{2} \\ge \\left(\\dfrac{a+b}{2}\\right)^2$ twice. </p>\n"
},
{
"answer_id": 2095466,
"score": 2,
"text": "<p>If you are looking for a more geometrical explanation, consider the family of curves $$x^4+y^4=k$$</p>\n\n<p>These are a family of concentric rounded square shaped curves with order 4 symmetry centred at the origin. The line $y=x$ is a line of symmetry. We need to find the value of $k$ for which such a curve is tangent to the line $x+y=1$.</p>\n\n<p>By symmetry $x=y=\\frac 12$ and hence $$k=\\left(\\frac 12\\right)^2+\\left(\\frac 12\\right)^2=\\frac 18$$</p>\n"
},
{
"answer_id": 2095408,
"score": 1,
"text": "<p>$$(x+y)^4=1 \\\\\nx^4+4x^3y+6x^2y^2+4x^3+y^4 =1$$</p>\n\n<p>Now, by AM-GM we have\n$$x^3y \\leq \\frac{x^4+x^4+x^4+y^4}{4}\\\\\nx^2y^2 \\leq \\frac{x^4+y^4}{2}\\\\\nxy^3 \\leq \\frac{x^4+y^4+y^4+y^4}{4}\\\\\n$$</p>\n\n<p>Thus\n$$1=x^4+4x^3y+6x^2y^2+4x^3+y^4 \\leq x^3+(3x^4+y^4)+3(x^4+y^4)+(x^4+3y^4)+y^4$$</p>\n"
},
{
"answer_id": 2095535,
"score": 1,
"text": "<p>If $x\\le0$, then $y\\ge1$, so $x^4+y^4\\ge1$. Hence we can assume $x,y>0$.</p>\n\n<p>For $x,y>0$, $x\\ne y$, the function\n$$\n\\mu(x,y;t)=\\begin{cases}\n\\left(\\dfrac{x^t+y^t}{2}\\right)^{1/t} & \\text{if $t\\ne0$} \\\\[6px]\n\\sqrt{xy\\vphantom{X}} & \\text{if $t=0$}\n\\end{cases}\n$$\nis continuous and increasing in the variable $t$. Thus\n$$\n\\mu(x,y;1)<\\mu(x,y;4)\n$$\nthat is\n$$\n\\frac{x+y}{2}<\\left(\\dfrac{x^4+y^4}{2}\\right)^{1/4}\n$$\nand, if $x+y=1$,\n$$\n\\frac{1}{16}<\\frac{x^4+y^4}{2}\n$$</p>\n\n<p>For $x=y$, the inequality is obvious (and is an equality, actually).</p>\n\n<hr>\n\n<p>The proof that $\\mu(x,y;t)$ is increasing (for $x\\ne y$) is an application of convexity. Suppose $0<p<q$; we want to prove that\n$$\n\\left(\\dfrac{x^p+y^p}{2}\\right)^{1/p}<\n\\left(\\dfrac{x^q+y^q}{2}\\right)^{1/q}\n$$\nthat is,\n$$\n\\left(\\dfrac{x^p+y^p}{2}\\right)^{q/p}<\\dfrac{x^q+y^q}{2}\n$$\nSet $u=x^p$ and $v=y^p$; then the inequality becomes\n$$\n\\left(\\dfrac{u+v}{2}\\right)^{q/p}<\\dfrac{u^{q/p}+v^{q/p}}{2}\n$$\nwhich is a consequence of $z\\mapsto z^{p/q}$ being convex (on $(0,\\infty)$).</p>\n\n<p>Since it's immediate that $\\mu(x,y;t)$ is continuous at $0$, we have that it is increasing over $[0,\\infty)$.</p>\n\n<p>Now notice that\n$$\n\\mu(x,y;-t)=\\mu(x^{-1},y^{-1};t)^{-1}\n$$\nand we can conclude that the function is also increasing over $(-\\infty,0]$.</p>\n\n<p>This is a “generalized AM-GM” inequality, which is the simple observation that $\\mu(x,y;0)<\\mu(x,y;1)$. For $t=-1$ we get the harmonic mean.</p>\n\n<p>The inequalities become nonstrict if and only if $x=y$.</p>\n\n<p>More generally, then, for $x+y=1$, $x,y>0$ and $t>1$, we have $\\mu(x,y;1)\\le\\mu(x,y;t)$, that is\n$$\nx^t+y^t\\ge\\frac{1}{2^{t-1}}\n$$</p>\n\n<p>If instead $0<t<1$, we have $\\mu(x,y;t)\\le\\mu(x,y;1)$, so\n$$\nx^t+y^t\\le 2^{1-t}\n$$</p>\n"
},
{
"answer_id": 2095629,
"score": 1,
"text": "<p>use Holder inequality we have\n$$(x^4+y^4)(1+1)(1+1)(1+1)\\ge (x+y)^4$$</p>\n"
},
{
"answer_id": 2095403,
"score": 6,
"text": "<p>It is equivalent to showing that $(1/2+\\epsilon)^4+(1/2-\\epsilon)^4\\geq 1/8$. But this simplifies to $\\tfrac{1}{8}(16\\epsilon^4+12\\epsilon^2+1)$, which obviously is minimized at $\\epsilon=0$, giving $\\tfrac{1}{8}$.</p>\n\n<p>Edit: This is motivated by <em>guessing</em> the minimum, i.e. $x=y=1/2$, and then symmetrizing. The symmetry allows a nice expansion using the binomial theorem.</p>\n"
},
{
"answer_id": 2095402,
"score": 4,
"text": "<p>Also a Cauchy-Schwarz approach works:\n\\begin{align*}\n1 = x + y \\le \\sqrt{(x^2+y^2)(1^2+1^2)} \\implies x^2 + y^2 \\ge \\frac{1}{2} \\\\\n\\frac{1}{2} \\le x^2 + y^2 \\le \\sqrt{(x^4 + y^4)(1^2 + 1^2)} \\implies x^4 + y^4 \\ge \\frac{1}{8}\n\\end{align*}</p>\n"
},
{
"answer_id": 2095398,
"score": 3,
"text": "<p>If $a,b\\in \\mathbb{R}$, then $(a+b)^2\\leq 2(a^2+b^2)$ since $2ab\\leq a^2+b^2$.</p>\n\n<p>Applying this with $x$ and $y$ we get\n$$ 1=(x+y)^2\\leq 2(x^2+y^2)$$\nso $x^2+y^2\\geq \\frac{1}{2}$, and then applying the inequality again with $x^2$ and $y^2$ we get\n$$ \\frac{1}{4}\\leq (x^2+y^2)^2\\leq 2(x^4+y^4)$$\nwhich is the desired result.</p>\n"
},
{
"answer_id": 2095400,
"score": 3,
"text": "<p>$x^4$ and $y^4$ are convex functions of $x$ and $y$, so their sum is also convex. The restriction of this to the line $x+y=1$ is again convex. By symmetry, the minimum must occur at the point where $x=y$.</p>\n"
},
{
"answer_id": 2095399,
"score": 2,
"text": "<p><strong>hint</strong>: I posted a solution to this same question a year ago, and I think you can search it here. You can use the inequality $\\dfrac{a^2+b^2}{2} \\ge \\left(\\dfrac{a+b}{2}\\right)^2$ twice. </p>\n"
},
{
"answer_id": 2095466,
"score": 2,
"text": "<p>If you are looking for a more geometrical explanation, consider the family of curves $$x^4+y^4=k$$</p>\n\n<p>These are a family of concentric rounded square shaped curves with order 4 symmetry centred at the origin. The line $y=x$ is a line of symmetry. We need to find the value of $k$ for which such a curve is tangent to the line $x+y=1$.</p>\n\n<p>By symmetry $x=y=\\frac 12$ and hence $$k=\\left(\\frac 12\\right)^2+\\left(\\frac 12\\right)^2=\\frac 18$$</p>\n"
}
] | Question: For $x+y=1$, show that $x^4+y^4\ge \frac{1}{8}$
Answer 1: <p>It is equivalent to showing that $(1/2+\epsilon)^4+(1/2-\epsilon)^4\geq 1/8$. But this simplifies to $\tfrac{1}{8}(16\epsilon^4+24\epsilon^2+1)$, which obviously is minimized at $\epsilon=0$, giving $\tfrac{1}{8}$.</p>
<p>Edit: This is motivated by <em>guessing</em> the minimum, i.e. $x=y=1/2$, and then symmetrizing. The symmetry allows a nice expansion using the binomial theorem.</p>
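<p>For reference, the simplification can be checked directly with the binomial theorem (the odd powers of $\epsilon$ cancel; worked out here):
$$\left(\tfrac12+\epsilon\right)^4+\left(\tfrac12-\epsilon\right)^4
=2\left(\tfrac{1}{16}+6\cdot\tfrac{1}{4}\epsilon^2+\epsilon^4\right)
=\tfrac{1}{8}\left(16\epsilon^4+24\epsilon^2+1\right).$$</p>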
Answer 2: <p>Also a Cauchy-Schwarz approach works:
\begin{align*}
1 = x + y \le \sqrt{(x^2+y^2)(1^2+1^2)} \implies x^2 + y^2 \ge \frac{1}{2} \\
\frac{1}{2} \le x^2 + y^2 \le \sqrt{(x^4 + y^4)(1^2 + 1^2)} \implies x^4 + y^4 \ge \frac{1}{8}
\end{align*}</p>
Answer 3: <p>If $a,b\in \mathbb{R}$, then $(a+b)^2\leq 2(a^2+b^2)$ since $2ab\leq a^2+b^2$.</p>
<p>Applying this with $x$ and $y$ we get
$$ 1=(x+y)^2\leq 2(x^2+y^2)$$
so $x^2+y^2\geq \frac{1}{2}$, and then applying the inequality again with $x^2$ and $y^2$ we get
$$ \frac{1}{4}\leq (x^2+y^2)^2\leq 2(x^4+y^4)$$
which is the desired result.</p>
Answer 4: <p>$x^4$ and $y^4$ are convex functions of $x$ and $y$, so their sum is also convex. The restriction of this to the line $x+y=1$ is again convex. By symmetry, the minimum must occur at the point where $x=y$.</p>
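<p>As a concrete check of that last step (worked out here): restricted to the line, the sum becomes $f(x)=x^4+(1-x)^4$, and
$$f'(x)=4x^3-4(1-x)^3=0 \iff x=1-x \iff x=\tfrac12,$$
so the minimum value is $f\left(\tfrac12\right)=2\left(\tfrac12\right)^4=\tfrac18$.</p>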
Answer 5: <p><strong>hint</strong>: I posted a solution to this same question a year ago, and I think you can search it here. You can use the inequality $\dfrac{a^2+b^2}{2} \ge \left(\dfrac{a+b}{2}\right)^2$ twice. </p>
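<p>Worked out explicitly, the two applications of that inequality read
$$\frac{x^2+y^2}{2}\ge\left(\frac{x+y}{2}\right)^2=\frac14
\quad\text{and}\quad
\frac{x^4+y^4}{2}\ge\left(\frac{x^2+y^2}{2}\right)^2\ge\frac{1}{16},$$
hence $x^4+y^4\ge\frac18$.</p>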
Answer 6: <p>If you are looking for a more geometrical explanation, consider the family of curves $$x^4+y^4=k$$</p>
<p>These are a family of concentric rounded square shaped curves with order 4 symmetry centred at the origin. The line $y=x$ is a line of symmetry. We need to find the value of $k$ for which such a curve is tangent to the line $x+y=1$.</p>
<p>By symmetry $x=y=\frac 12$ and hence $$k=\left(\frac 12\right)^4+\left(\frac 12\right)^4=\frac 18$$</p>
Answer 7: <p>$$(x+y)^4=1 \\
x^4+4x^3y+6x^2y^2+4xy^3+y^4 =1$$</p>
<p>Now, by AM-GM we have
$$x^3y \leq \frac{x^4+x^4+x^4+y^4}{4}\\
x^2y^2 \leq \frac{x^4+y^4}{2}\\
xy^3 \leq \frac{x^4+y^4+y^4+y^4}{4}\\
$$</p>
<p>Thus
$$1=x^4+4x^3y+6x^2y^2+4xy^3+y^4 \leq x^4+(3x^4+y^4)+3(x^4+y^4)+(x^4+3y^4)+y^4=8(x^4+y^4),$$
so $x^4+y^4 \geq \frac{1}{8}$.</p>
Answer 8: <p>If $x\le0$, then $y\ge1$, so $x^4+y^4\ge1$. Hence we can assume $x,y>0$.</p>
<p>For $x,y>0$, $x\ne y$, the function
$$
\mu(x,y;t)=\begin{cases}
\left(\dfrac{x^t+y^t}{2}\right)^{1/t} & \text{if $t\ne0$} \\[6px]
\sqrt{xy\vphantom{X}} & \text{if $t=0$}
\end{cases}
$$
is continuous and increasing in the variable $t$. Thus
$$
\mu(x,y;1)<\mu(x,y;4)
$$
that is
$$
\frac{x+y}{2}<\left(\dfrac{x^4+y^4}{2}\right)^{1/4}
$$
and, if $x+y=1$,
$$
\frac{1}{16}<\frac{x^4+y^4}{2},
$$
that is, $x^4+y^4>\frac{1}{8}$.</p>
<p>For $x=y$, the inequality is obvious (and is an equality, actually).</p>
<hr>
<p>The proof that $\mu(x,y;t)$ is increasing (for $x\ne y$) is an application of convexity. Suppose $0<p<q$; we want to prove that
$$
\left(\dfrac{x^p+y^p}{2}\right)^{1/p}<
\left(\dfrac{x^q+y^q}{2}\right)^{1/q}
$$
that is,
$$
\left(\dfrac{x^p+y^p}{2}\right)^{q/p}<\dfrac{x^q+y^q}{2}
$$
Set $u=x^p$ and $v=y^p$; then the inequality becomes
$$
\left(\dfrac{u+v}{2}\right)^{q/p}<\dfrac{u^{q/p}+v^{q/p}}{2}
$$
which is a consequence of $z\mapsto z^{q/p}$ being convex (on $(0,\infty)$).</p>
<p>Since it's immediate that $\mu(x,y;t)$ is continuous at $0$, we have that it is increasing over $[0,\infty)$.</p>
<p>Now notice that
$$
\mu(x,y;-t)=\mu(x^{-1},y^{-1};t)^{-1}
$$
and we can conclude that the function is also increasing over $(-\infty,0]$.</p>
<p>This generalizes the AM-GM inequality, which is the simple observation that $\mu(x,y;0)<\mu(x,y;1)$. For $t=-1$ we get the harmonic mean.</p>
<p>The inequalities become equalities if and only if $x=y$.</p>
<p>More generally, then, for $x+y=1$, $x,y>0$ and $t>1$, we have $\mu(x,y;1)\le\mu(x,y;t)$, that is
$$
x^t+y^t\ge\frac{1}{2^{t-1}}
$$</p>
<p>If instead $0<t<1$, we have $\mu(x,y;t)\le\mu(x,y;1)$, so
$$
x^t+y^t\le 2^{1-t}
$$</p>
Answer 9: <p>Using Hölder's inequality we have
$$(x^4+y^4)(1+1)(1+1)(1+1)\ge (x+y)^4=1,$$
so $x^4+y^4\ge\frac{1}{8}$.</p>
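<p>As a quick numerical sanity check of all of the above (a minimal Python sketch; the sampling window and grid size are arbitrary choices):</p>
<pre><code># Sample x^4 + y^4 along the constraint line x + y = 1 and locate its minimum.
import numpy as np

x = np.linspace(-2.0, 3.0, 100001)  # arbitrary window around the expected minimum
y = 1.0 - x                          # enforce the constraint x + y = 1
values = x**4 + y**4

print(values.min())                  # ~0.125, i.e. the bound 1/8
print(x[np.argmin(values)])          # ~0.5, i.e. x = y = 1/2
</code></pre>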
|
types of distances between points | 2,095,431 | 5 | https://math.stackexchange.com/questions/2095431/types-of-distances-between-points | <p>I just finished working on a project where we had to allocate facilities in a $2$-dimensional plane in order to satisfy certain demand restrictions while keeping the cost at a minimum. I do NOT have a degree in mathematics, so my apologies if my question seems a little dumb and uneducated. We worked with Cartesian coordinates and a distance function $$d:\mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$$ with the distance between $(x_1,y_1)$ and $(x_2,y_2)$ given by $\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}$. In the assignment it was specifically stated that we used the Euclidean distance, which got me asking myself: what other types of distance measurements could we use (absolute deviation, for instance)? This brings me to my question: </p>
<blockquote>
<p><em>What other primary distance measure(s) exist(s) and what are possible applications of these?</em></p>
</blockquote>
<p>I am curious to see what you guys can bring me!</p>
| [
{
"answer_id": 2095445,
"score": 2,
"text": "<p>The term you're looking for is a <em>metric</em>. <a href=\"https://en.wikipedia.org/wiki/Metric_(mathematics)\" rel=\"nofollow noreferrer\">From the wiki</a>: <em>A metric or distance function is a function that defines a distance between each pair of elements of a set.</em> </p>\n\n<p>A fun example is the so-called \"<a href=\"https://en.wikipedia.org/wiki/Taxicab_geometry\" rel=\"nofollow noreferrer\">taxicab metric</a>\", or $L^1$, where the distance function (or metric) is given by </p>\n\n<p>$$d(\\mathbf{x,y})=\\sum_{i=1}^n|x_i-y_i|,$$\nwhere $\\bf{x,y}$ are $n$-dimensional vectors. $L^2$ is the usual Euclidean metric (<a href=\"https://en.wikipedia.org/wiki/Lp_space#The_p-norm_in_finite_dimensions\" rel=\"nofollow noreferrer\">here</a> is a link on $L^p$) and there are many others. PBS Infinite Series recently made <a href=\"https://www.youtube.com/watch?v=ineO1tIyPfM\" rel=\"nofollow noreferrer\">a nice video</a> on the subject. </p>\n"
},
{
"answer_id": 2095449,
"score": 2,
"text": "<p>There is a set of related possible distance measures (which some call \"metrics\") of the form</p>\n\n<p>$$d(a,b) = \\left( |(a_x-b_x)|^\\ell + (a_y-b_y)|^\\ell \\right)^\\frac1{\\ell}$$</p>\n\n<p>The distance between a point and the origin in such a metric is often called the \"$\\ell$-norm, and when $\\ell = 2$ you get the Euclidean metric, while as $\\ell$ goes to infinity, you get a metric which is the larger of the $x$ or $y$ separations. </p>\n\n<p>WHen $\\ell = 1$ you get the sum of $|(a_x-b_x)| + |(a_y-b_y)|$, sometimes called the \"Manhatan metric\" because it is the distance a taxi would have to travel on orthogonal streets.</p>\n\n<p>But there are plenty of other metrics as well. One property that they all share is the triangle identity:</p>\n\n<p>$$d(a,b) + d(b,c) \\geq d(a,c)$$</p>\n"
}
] | Question: types of distances between points
Answer 1: <p>The term you're looking for is a <em>metric</em>. <a href="https://en.wikipedia.org/wiki/Metric_(mathematics)" rel="nofollow noreferrer">From the wiki</a>: <em>A metric or distance function is a function that defines a distance between each pair of elements of a set.</em> </p>
<p>A fun example is the so-called "<a href="https://en.wikipedia.org/wiki/Taxicab_geometry" rel="nofollow noreferrer">taxicab metric</a>", or $L^1$, where the distance function (or metric) is given by </p>
<p>$$d(\mathbf{x,y})=\sum_{i=1}^n|x_i-y_i|,$$
where $\bf{x,y}$ are $n$-dimensional vectors. $L^2$ is the usual Euclidean metric (<a href="https://en.wikipedia.org/wiki/Lp_space#The_p-norm_in_finite_dimensions" rel="nofollow noreferrer">here</a> is a link on $L^p$) and there are many others. PBS Infinite Series recently made <a href="https://www.youtube.com/watch?v=ineO1tIyPfM" rel="nofollow noreferrer">a nice video</a> on the subject. </p>
Answer 2: <p>There is a family of related distance measures (which some call "metrics") of the form</p>
<p>$$d(a,b) = \left( |a_x-b_x|^\ell + |a_y-b_y|^\ell \right)^{\frac1{\ell}}$$</p>
<p>The distance between a point and the origin in such a metric is often called the "$\ell$-norm", and when $\ell = 2$ you get the Euclidean metric, while as $\ell$ goes to infinity you get a metric which is the larger of the $x$ or $y$ separations. </p>
<p>When $\ell = 1$ you get the sum $|a_x-b_x| + |a_y-b_y|$, sometimes called the "Manhattan metric" because it is the distance a taxi would have to travel on orthogonal streets.</p>
<p>But there are plenty of other metrics as well. One property that they all share is the triangle inequality:</p>
<p>$$d(a,b) + d(b,c) \geq d(a,c)$$</p>
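For concreteness (a worked example added here; it is not part of either original answer): take $a=(1,2)$ and $b=(4,6)$, so the coordinate separations are $3$ and $4$. Then
$$d_1(a,b)=3+4=7,\qquad d_2(a,b)=\sqrt{3^2+4^2}=5,\qquad d_\infty(a,b)=\max(3,4)=4,$$
which illustrates the general pattern: as $\ell$ grows, the distance shrinks from the Manhattan value down toward the largest single coordinate separation.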
|
Eigenvectors of complex matrix | 2,095,575 | 5 | https://math.stackexchange.com/questions/2095575/eigenvectors-of-complex-matrix | <p>I'm working on a problem where I am trying to find the eigenvectors of a pretty complicated matrix, and I am in need of some assistance. The matrix in question is:</p>
<p>$$A =\begin{bmatrix}
\sin(x) & \cos(x)\cos(y) - i\cos(x)\sin(y)\\
\cos(x)\cos(y) + i\cos(x)\sin(y) & -\sin(x)\\
\end{bmatrix}$$</p>
<p>I know that the matrix is Hermitian, so that it is equal to its own conjugate transpose. Moreover, the eigenvalues are $\lambda = \pm 1$, since $A^2 = I$. However, I'm not sure how to use these properties to find the possible eigenvectors (if that would even help), and I would like to avoid doing it by brute force if possible, as it seems unwieldy. </p>
<p>Thus far, I have tried to separate the matrix into real and imaginary parts, but that didn't seem to help. I also had the thought to assume diagonalization in an attempt to find the diagonalizing unitary matrix (and, in turn, the eigenvectors), but I don't see that making things much nicer either. Any help would be greatly appreciated.</p>
| [
{
"answer_id": 2095585,
"score": 4,
"text": "<p>The determinant is\n$$\n-\\sin^2x-(\\cos^2x\\cos^2y+\\cos^2x\\sin^2y)=-1\n$$\nso the characteristic polynomial is $X^2-1$, because the trace is $0$.</p>\n\n<p>An eigenvector relative to $1$ is a nonzero solution of\n$$\n\\begin{bmatrix}\n\\sin x - 1 & \\cos x\\cos y−i\\cos x\\sin y\n\\end{bmatrix}\n\\begin{bmatrix}\n\\alpha \\\\ \\beta\n\\end{bmatrix}=0\n$$\nso an eigenvector is\n$$\n\\begin{bmatrix}\n\\cos x\\cos y−i\\cos x\\sin y \\\\\n1-\\sin x\n\\end{bmatrix}\n$$\nunless $\\sin x=1$ when an eigenvector is\n$$\n\\begin{bmatrix}1 \\\\ 1\\end{bmatrix}\n$$</p>\n\n<p>For the $-1$ eigenvector, do similarly.</p>\n"
},
{
"answer_id": 2095818,
"score": 1,
"text": "<p>Note that\n$$\nA=DQD^{-1},\\ D=\\pmatrix{1\\\\ &e^{iy}},\\ Q=\\pmatrix{\\sin x&\\cos x\\\\ \\cos x&-\\sin x}.\n$$\nIt follows that if $v$ is an eigenvector of $Q$, then $Dv$ is an eigenvector of $A$.</p>\n"
}
] | Question: Eigenvectors of complex matrix
Answer 1: <p>The determinant is
$$
-\sin^2x-(\cos^2x\cos^2y+\cos^2x\sin^2y)=-1
$$
so the characteristic polynomial is $X^2-1$, because the trace is $0$.</p>
<p>An eigenvector relative to $1$ is a nonzero solution of
$$
\begin{bmatrix}
\sin x - 1 & \cos x\cos y-i\cos x\sin y
\end{bmatrix}
\begin{bmatrix}
\alpha \\ \beta
\end{bmatrix}=0
$$
so an eigenvector is
$$
\begin{bmatrix}
\cos x\cos y-i\cos x\sin y \\
1-\sin x
\end{bmatrix}
$$
unless $\sin x=1$ when an eigenvector is
$$
\begin{bmatrix}1 \\ 1\end{bmatrix}
$$</p>
<p>For the $-1$ eigenvector, do similarly.</p>
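A quick consistency check (a worked verification added here, not part of the original answer): writing $\cos x\cos y-i\cos x\sin y=\cos x\,e^{-iy}$, so that the off-diagonal entries of $A$ are $\cos x\,e^{\mp iy}$, the eigenvector above satisfies
$$A\begin{bmatrix}\cos x\,e^{-iy} \\ 1-\sin x\end{bmatrix}
=\begin{bmatrix}\sin x\cos x\,e^{-iy}+\cos x\,e^{-iy}(1-\sin x) \\ \cos^2 x-\sin x(1-\sin x)\end{bmatrix}
=\begin{bmatrix}\cos x\,e^{-iy} \\ 1-\sin x\end{bmatrix},$$
using $\cos^2 x+\sin^2 x=1$ in the second component, so it is indeed fixed by $A$.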
Answer 2: <p>Note that
$$
A=DQD^{-1},\ D=\pmatrix{1\\ &e^{iy}},\ Q=\pmatrix{\sin x&\cos x\\ \cos x&-\sin x}.
$$
It follows that if $v$ is an eigenvector of $Q$, then $Dv$ is an eigenvector of $A$.</p>
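As a worked link back to the previous answer (added for illustration): an eigenvector of $Q$ for the eigenvalue $1$ is $v=\pmatrix{\cos x \\ 1-\sin x}$, since $\cos^2 x+(-\sin x-1)(1-\sin x)=\cos^2 x-\cos^2 x=0$, and then
$$Dv=\pmatrix{\cos x \\ e^{iy}(1-\sin x)}=e^{iy}\pmatrix{\cos x\,e^{-iy} \\ 1-\sin x},$$
which is the eigenvector found above, up to the nonzero scalar $e^{iy}$.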
|
Convert this word problem into a system of equations | 2,095,730 | 5 | https://math.stackexchange.com/questions/2095730/convert-this-word-problem-into-a-system-of-equations | <blockquote>
<p>When a school play charges <span class="math-container">$\$2$</span> for admission, an average of <span class="math-container">$100$</span> people attend. For each <span class="math-container">$10¢$</span> increase in admission price, the average number decreases by <span class="math-container">$1$</span>. What charge would make the most money?</p>
</blockquote>
<p>How do I convert this into a system of equations?</p>
<p>Note: For the question the system is supposed to be a parabola equation in vertex form, then find the maximum that way. For example:</p>
<blockquote>
<blockquote>
<p>A rancher is fencing off a rectangular area with a fixed perimeter of <span class="math-container">$76$</span>m. What dimensions would yield the maximum area? Find the maximum area.</p>
</blockquote>
<p>So then for this the equation would be <span class="math-container">$f(x)=x(38-x)=-(x-19)^2+361$</span>; completing the square yields the answer of <span class="math-container">$19\times19$</span> for the dimensions and <span class="math-container">$361$</span> for the maximum area.</p>
</blockquote>
| [
{
"answer_id": 2095737,
"score": 1,
"text": "<p>Start with the basic equation: number of people by cost. Modify the equation to add the variation of people and price. do the maximisation of the function and you get what is the maximum value</p>\n"
},
{
"answer_id": 2095741,
"score": 1,
"text": "<p>In mathematics, variables are abstract concepts that stand for concrete ideas or objects. It really does not matter what variables you use, as long as your convention is consistent and understandable. </p>\n\n<p>Let us allow $P$ to stand for the profit the school makes from a play, $N$ the average number of people and $S$ the price of the tickets. We know from the problem that when the school charges \\$2 per ticket (S = 2), an average of 100 (N = 100) people show up. At this price the school makes a profit of P = N $\\times$ S = 2 $\\times$ 100 = \\$200. Now, try to figure out how changing the price of the ticket affects the profit the school makes. How does changing the price to \\$2.10 affect profits, or \\$2.40?</p>\n"
},
{
"answer_id": 2095785,
"score": 1,
"text": "<p>$2$ dollars corresponds to $100$ people. It would be wise to consider this information as the point $(2,100)$ to emphasize that they go together. Notice we chose that dollars goes with the $x$ coordinate, it doesn't matter, but we have to be consistent and logical in everything we say. </p>\n\n<p>Given that dollars are corresponding to the $x$ coordinates, and the information they gave you then the slope of the graph of number of people $P$ vs the price set $x$ is $\\frac{-1}{0.10}$. </p>\n\n<p>I think now you know how to set up a function $P(x)$, we have a point on the line and the slope of the line.</p>\n\n<p>Now that you know the amount of people that will correspond to a set price $x$, the money in hand is just,</p>\n\n<p>$$xP(x)$$</p>\n"
}
] | Question: Convert this word problem into a system of equations
Answer 1: <p>Start with the basic equation: revenue equals the number of people times the cost. Modify the equation to account for the variation of the number of people with price. Then do the maximisation of the function and you get the maximum value.</p>
Answer 2: <p>In mathematics, variables are abstract concepts that stand for concrete ideas or objects. It really does not matter what variables you use, as long as your convention is consistent and understandable. </p>
<p>Let us allow $P$ to stand for the profit the school makes from a play, $N$ the average number of people and $S$ the price of the tickets. We know from the problem that when the school charges \$2 per ticket (S = 2), an average of 100 (N = 100) people show up. At this price the school makes a profit of P = N $\times$ S = 100 $\times$ 2 = \$200. Now, try to figure out how changing the price of the ticket affects the profit the school makes. How does changing the price to \$2.10 affect profits, or \$2.40?</p>
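Running the suggested numbers (a worked check added here, not in the original answer): at \$2.10 an average of $99$ people attend, so $P = 99 \times 2.10 = \$207.90$; at \$2.40 it is $96$ people and $P = 96 \times 2.40 = \$230.40$. So raising the price increases the take, at least at first.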
Answer 3: <p>$2$ dollars corresponds to $100$ people. It would be wise to consider this information as the point $(2,100)$ to emphasize that they go together. Notice we chose that dollars goes with the $x$ coordinate, it doesn't matter, but we have to be consistent and logical in everything we say. </p>
<p>Given that dollars are corresponding to the $x$ coordinates, and the information they gave you then the slope of the graph of number of people $P$ vs the price set $x$ is $\frac{-1}{0.10}$. </p>
<p>I think now you know how to set up a function $P(x)$, we have a point on the line and the slope of the line.</p>
<p>Now that you know the amount of people that will correspond to a set price $x$, the money in hand is just,</p>
<p>$$xP(x)$$</p>
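A worked sketch that carries this through (added for illustration, following the setup above): with the price $x$ in dollars, attendance is $P(x) = 100 - \frac{x-2}{0.10} = 120 - 10x$, so the money taken in is
$$xP(x) = 120x - 10x^2 = -10(x-6)^2 + 360,$$
a downward parabola in vertex form whose maximum is at $x = 6$: a charge of \$6 draws $60$ people and brings in \$360.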
|
Find the maximum value of the expression ${\frac {x}{1+x^2}} + {\frac {y}{1+y^2}}+{\frac {z}{1+z^2}}$. | 2,095,843 | 5 | https://math.stackexchange.com/questions/2095843/find-the-maximum-value-of-the-expression-frac-x1x2-frac-y1y2 | <blockquote>
<p>Find the maximum value of the expression :
<span class="math-container">$${\frac {x}{1+x^2}} + {\frac {y}{1+y^2}}+{\frac {z}{1+z^2}}$$</span>
where <span class="math-container">$x,y,z$</span> are <em>real</em> numbers satisfying the condition that <span class="math-container">$x+y+z=1$</span>.</p>
</blockquote>
<p>Taking <span class="math-container">$x=y=z=\frac {1}{3}$</span>, I get the expression as <span class="math-container">$\frac {3x}{1+x^2}$</span>, which is equal to <span class="math-container">$\frac {1}{1+{\frac{1}{9}}}$</span> or <span class="math-container">$\frac {9}{10}$</span>.</p>
<p>How can I actually solve the problem without making unnecessary assumptions ?</p>
| [
{
"answer_id": 2095932,
"score": 5,
"text": "<p>We use the method of <a href=\"https://en.wikipedia.org/wiki/Lagrange_multiplier\" rel=\"noreferrer\">Lagrange Multipliers</a>.</p>\n\n<p>First, we define the function to maximize as $f(x,y,z)=\\frac{x}{1+x^2}+\\frac{y}{1+y^2}+\\frac{z}{1+z^2}$ and our constraint as $g(x,y,z)=x+y+z-1$.</p>\n\n<p>Thus, the Lagrange function is:</p>\n\n<p>$$\\mathcal{L}(x,y,z,\\lambda)=f(x,y,z)+\\lambda\\cdot g(x,y,z)$$</p>\n\n<p>$$\\mathcal{L}(x,y,z,\\lambda)=\\frac{x}{1+x^2}+\\frac{y}{1+y^2}+\\frac{z}{1+z^2}+\\lambda(x+y+z-1) \\tag{1}$$</p>\n\n<p>Where $\\lambda$ is some scalar. This function's partial derivatives must equal all equal to zero:</p>\n\n<p>$$\\frac{\\partial \\mathcal{L}}{\\partial x}=\\frac{1-x^2}{(x^2+1)^2}+\\lambda=0$$\n$$\\frac{\\partial \\mathcal{L}}{\\partial y}=\\frac{1-y^2}{(y^2+1)^2}+\\lambda=0$$\n$$\\frac{\\partial \\mathcal{L}}{\\partial z}=\\frac{1-z^2}{(z^2+1)^2}+\\lambda=0$$\n$$\\frac{\\partial \\mathcal{L}}{\\partial \\lambda}=x+y+z-1=0$$</p>\n\n<p>Solving this system of equations will yield values for $x,y,z,\\lambda$ which will either minimize or maximize $f(x,y,z)$.</p>\n\n<p>After some tedious calculation, we notice that the solution which maximizes the value of $f(x,y,z)$ is:</p>\n\n<p>$x=y=z=\\frac{1}{3}$ and $\\lambda=-0.72$, as you have correctly predicted.</p>\n\n<p>Thus, the maximum value of $f(x,y,z)=\\frac{x}{1+x^2}+\\frac{y}{1+y^2}+\\frac{z}{1+z^2}=\\frac{9}{10}$.</p>\n"
},
{
"answer_id": 2130873,
"score": 2,
"text": "<p>Let $x=\\frac{a}{3}$, $y=\\frac{b}{3}$ and $z=\\frac{c}{3}$.</p>\n\n<p>Hence, $a+b+c=3$ and we need to prove that $$\\sum_{cyc}\\frac{a}{9+a^2}\\leq\\frac{3}{10}$$ or\n$$\\sum_{cyc}\\left(\\frac{1}{10}-\\frac{a}{9+a^2}\\right)\\geq0$$ or\n$$\\sum_{cyc}\\frac{a^2-10a+9}{9+a^2}\\geq0$$ or\n$$\\sum_{cyc}\\left(\\frac{a^2-10a+9}{9+a^2}+\\frac{2}{3}\\right)\\geq2$$ or\n$$\\sum_{cyc}\\frac{(a-3)^2}{9+a^2}\\geq\\frac{6}{5}.$$\nLet $k=3\\sqrt2-6$. Hence, by C-S\n$$\\sum_{cyc}\\frac{(a-3)^2}{9+a^2}=\\sum_{cyc}\\frac{(a-3)^2(a+k)^2}{(a+k)^2(9+a^2)}\\geq\\frac{\\left(\\sum\\limits_{cyc}(a-3)(a+k)\\right)^2}{\\sum\\limits_{cyc}(a+k)^2(a^2+9)}=\\frac{\\left(\\sum\\limits_{cyc}(a^2-3-2k)\\right)^2}{\\sum\\limits_{cyc}(a+k)^2(a^2+9)}.$$\nThus, it remains to prove that\n$$5\\left(\\sum\\limits_{cyc}(a^2-3-2k)\\right)^2\\geq6\\sum\\limits_{cyc}(a+k)^2(a^2+9),$$\nwhich is fourth degree.</p>\n\n<p>Let $a+b+c=3u$, $ab+ac+bc=3v^2$, where $v^2$ can be negative and $abc=w^3$.</p>\n\n<p>Hence, the last inequality is a linear inequality of $w^3$, which says that</p>\n\n<p>it's enough to prove the last inequality for en extremal value of $w^3$, </p>\n\n<p>which happens for equality case of two variables.</p>\n\n<p>Let $b=a$ and $c=3-2a$. We obtain\n$$(a-1)^2(\\sqrt2a-3\\sqrt2+3)^2\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2132190,
"score": 2,
"text": "<p>Let $x+y+z=3u$, $ab+ac+bc=3v^2$ and $abc=w^3$.</p>\n\n<p>We need to prove that $$\\sum_{cyc}\\frac{x}{x^2+(x+y+z)^2}\\leq\\frac{9}{10(x+y+z)}$$ or\n$$9\\prod_{cyc}(x^2+(x+y+z)^2)-10(x+y+z)\\sum_{cyc}x(y^2+(x+y+z)^2)(z^2+(x+y+z)^2)\\geq0,$$\nwhich is $9w^6+A(u,v^2)w^3+B(u,v^2)\\geq0$, which says that\n$$9\\prod_{cyc}(x^2+(x+y+z)^2)-10(x+y+z)\\sum_{cyc}x(y^2+(x+y+z)^2)(z^2+(x+y+z)^2)\\geq$$\n$$\\geq\\frac{1}{81}\\left(\\sum_{cyc}(4x^3-3x^2y-3x^2z+2xyz)\\right)^2$$\nis a linear inequality of $w^3$.</p>\n\n<p>Thus, by $uvw$ it remains to prove the last inequality for $y=z=1$, which gives\n$$(x-1)^2(x+2)^2(79x^2+253x+253)\\geq0,$$\nwhich is obviously true.</p>\n\n<p>Done!</p>\n"
}
] | Question: Find the maximum value of the expression ${\frac {x}{1+x^2}} + {\frac {y}{1+y^2}}+{\frac {z}{1+z^2}}$.
Answer 1: <p>We use the method of <a href="https://en.wikipedia.org/wiki/Lagrange_multiplier" rel="noreferrer">Lagrange Multipliers</a>.</p>
<p>First, we define the function to maximize as $f(x,y,z)=\frac{x}{1+x^2}+\frac{y}{1+y^2}+\frac{z}{1+z^2}$ and our constraint as $g(x,y,z)=x+y+z-1$.</p>
<p>Thus, the Lagrange function is:</p>
<p>$$\mathcal{L}(x,y,z,\lambda)=f(x,y,z)+\lambda\cdot g(x,y,z)$$</p>
<p>$$\mathcal{L}(x,y,z,\lambda)=\frac{x}{1+x^2}+\frac{y}{1+y^2}+\frac{z}{1+z^2}+\lambda(x+y+z-1) \tag{1}$$</p>
<p>Here $\lambda$ is a scalar. The partial derivatives of this function must all equal zero:</p>
<p>$$\frac{\partial \mathcal{L}}{\partial x}=\frac{1-x^2}{(x^2+1)^2}+\lambda=0$$
$$\frac{\partial \mathcal{L}}{\partial y}=\frac{1-y^2}{(y^2+1)^2}+\lambda=0$$
$$\frac{\partial \mathcal{L}}{\partial z}=\frac{1-z^2}{(z^2+1)^2}+\lambda=0$$
$$\frac{\partial \mathcal{L}}{\partial \lambda}=x+y+z-1=0$$</p>
<p>Solving this system of equations yields the critical points of $f(x,y,z)$ on the constraint surface; the extrema of $f$ are among them.</p>
<p>After some tedious calculation, we notice that the solution which maximizes the value of $f(x,y,z)$ is:</p>
<p>$x=y=z=\frac{1}{3}$ and $\lambda=-0.72$, as you have correctly predicted.</p>
<p>Thus, the maximum value of $f(x,y,z)=\frac{x}{1+x^2}+\frac{y}{1+y^2}+\frac{z}{1+z^2}=\frac{9}{10}$.</p>
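<p>As a quick numerical cross-check of the critical point found above (a sketch, not part of the original argument, assuming <code>numpy</code> and <code>scipy</code> are available), a multi-start constrained optimizer should land on $x=y=z=1/3$ with value $9/10$:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def neg_f(v):
    # negate f, since scipy minimizes
    x, y, z = v
    return -(x/(1 + x**2) + y/(1 + y**2) + z/(1 + z**2))

constraint = {'type': 'eq', 'fun': lambda v: v.sum() - 1.0}
rng = np.random.default_rng(0)
best = min(
    (minimize(neg_f, x0, constraints=[constraint]) for x0 in rng.uniform(-5, 5, (40, 3))),
    key=lambda r: r.fun,
)
print(best.x, -best.fun)  # expect roughly [1/3, 1/3, 1/3] and 0.9
</code></pre>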
Answer 2: <p>Let $x=\frac{a}{3}$, $y=\frac{b}{3}$ and $z=\frac{c}{3}$.</p>
<p>Hence, $a+b+c=3$ and we need to prove that $$\sum_{cyc}\frac{a}{9+a^2}\leq\frac{3}{10}$$ or
$$\sum_{cyc}\left(\frac{1}{10}-\frac{a}{9+a^2}\right)\geq0$$ or
$$\sum_{cyc}\frac{a^2-10a+9}{9+a^2}\geq0$$ or
$$\sum_{cyc}\left(\frac{a^2-10a+9}{9+a^2}+\frac{2}{3}\right)\geq2$$ or
$$\sum_{cyc}\frac{(a-3)^2}{9+a^2}\geq\frac{6}{5}.$$
Let $k=3\sqrt2-6$. Hence, by C-S
$$\sum_{cyc}\frac{(a-3)^2}{9+a^2}=\sum_{cyc}\frac{(a-3)^2(a+k)^2}{(a+k)^2(9+a^2)}\geq\frac{\left(\sum\limits_{cyc}(a-3)(a+k)\right)^2}{\sum\limits_{cyc}(a+k)^2(a^2+9)}=\frac{\left(\sum\limits_{cyc}(a^2-3-2k)\right)^2}{\sum\limits_{cyc}(a+k)^2(a^2+9)}.$$
Thus, it remains to prove that
$$5\left(\sum\limits_{cyc}(a^2-3-2k)\right)^2\geq6\sum\limits_{cyc}(a+k)^2(a^2+9),$$
which is fourth degree.</p>
<p>Let $a+b+c=3u$, $ab+ac+bc=3v^2$, where $v^2$ can be negative and $abc=w^3$.</p>
<p>Hence, the last inequality is a linear inequality of $w^3$, which says that</p>
<p>it's enough to prove the last inequality for an extremal value of $w^3$, </p>
<p>which occurs when two of the variables are equal.</p>
<p>Let $b=a$ and $c=3-2a$. We obtain
$$(a-1)^2(\sqrt2a-3\sqrt2+3)^2\geq0.$$
Done!</p>
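<p>The reduction to one variable can be spot-checked symbolically; the sketch below (assuming <code>sympy</code> is available; the names are ad hoc) substitutes $b=a$, $c=3-2a$ and samples the difference $5\left(\sum\limits_{cyc}(a^2-3-2k)\right)^2-6\sum\limits_{cyc}(a+k)^2(a^2+9)$, which should be nonnegative and vanish at $a=1$:</p>
<pre><code>import sympy as sp

a = sp.symbols('a', real=True)
k = 3*sp.sqrt(2) - 6
ts = (a, a, 3 - 2*a)  # b = a, c = 3 - 2a
S = sum(t**2 - 3 - 2*k for t in ts)
diff = sp.expand(5*S**2 - 6*sum((t + k)**2 * (t**2 + 9) for t in ts))
samples = [diff.subs(a, sp.Rational(i, 7)).evalf() for i in range(-35, 36)]
print(min(samples), sp.simplify(diff.subs(a, 1)))  # expect min >= 0 and exactly 0 at a = 1
</code></pre>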
Answer 3: <p>Let $x+y+z=3u$, $xy+xz+yz=3v^2$ and $xyz=w^3$.</p>
<p>We need to prove that $$\sum_{cyc}\frac{x}{x^2+(x+y+z)^2}\leq\frac{9}{10(x+y+z)}$$ or
$$9\prod_{cyc}(x^2+(x+y+z)^2)-10(x+y+z)\sum_{cyc}x(y^2+(x+y+z)^2)(z^2+(x+y+z)^2)\geq0,$$
which is $9w^6+A(u,v^2)w^3+B(u,v^2)\geq0$, which says that
$$9\prod_{cyc}(x^2+(x+y+z)^2)-10(x+y+z)\sum_{cyc}x(y^2+(x+y+z)^2)(z^2+(x+y+z)^2)\geq$$
$$\geq\frac{1}{81}\left(\sum_{cyc}(4x^3-3x^2y-3x^2z+2xyz)\right)^2$$
is a linear inequality of $w^3$.</p>
<p>Thus, by $uvw$ it remains to prove the last inequality for $y=z=1$, which gives
$$(x-1)^2(x+2)^2(79x^2+253x+253)\geq0,$$
which is obviously true.</p>
<p>Done!</p>
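<p>The $y=z=1$ case here can also be verified mechanically (a sketch assuming <code>sympy</code>): the difference of the two sides of the displayed inequality should factor through $(x-1)^2(x+2)^2(79x^2+253x+253)$, and the quadratic factor is positive for all real $x$ since its discriminant $253^2-4\cdot79\cdot253=-15939$ is negative.</p>
<pre><code>import sympy as sp

x, y, z = sp.symbols('x y z')
s = x + y + z
cyc = [(x, y, z), (y, z, x), (z, x, y)]
lhs = 9*sp.prod(a**2 + s**2 for a, b, c in cyc) \
    - 10*s*sum(a*(b**2 + s**2)*(c**2 + s**2) for a, b, c in cyc)
rhs = sp.Rational(1, 81)*sum(4*a**3 - 3*a**2*b - 3*a**2*c + 2*a*b*c for a, b, c in cyc)**2
F = sp.expand((lhs - rhs).subs({y: 1, z: 1}))
# the answer predicts a positive rational multiple of (x-1)**2*(x+2)**2*(79*x**2+253*x+253)
print(sp.factor(F))
</code></pre>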
|
Solve the equation ${\sqrt{4x^2 + 5x+1}} - 2{\sqrt {x^2-x+1}} = 9x-3$ | 2,095,875 | 5 | https://math.stackexchange.com/questions/2095875/solve-the-equation-sqrt4x2-5x1-2-sqrt-x2-x1-9x-3 | <p>I tried factoring the expression inside the square root, but that does not seem to help. Squaring the equation makes it even more terrible.</p>
<p>Can anyone provide a hint about what should be done? </p>
| [
{
"answer_id": 2095905,
"score": 8,
"text": "<p>Notice that</p>\n\n<p>$$\\left({\\sqrt{4x^2 + 5x+1}} - 2{\\sqrt {x^2-x+1}}\\right) \\left({\\sqrt{4x^2 + 5x+1}} + 2{\\sqrt {x^2-x+1}}\\right) = 9x-3.$$\nThus, </p>\n\n<p>$$9x-3 = \\left(9x-3\\right)\\left({\\sqrt{4x^2 + 5x+1}} + 2{\\sqrt {x^2-x+1}}\\right) \\implies $$</p>\n\n<p>$$9x-3 = 0 \\implies x=1/3$$ or</p>\n\n<p>$${\\sqrt{4x^2 + 5x+1}} + 2{\\sqrt {x^2-x+1}} = 1.$$</p>\n\n<p>But the minimal value of $x^2-x+1$ is $3/4$, which implies that \n$$ 2{\\sqrt {x^2-x+1}} \\geq \\sqrt{3} > 1.$$</p>\n\n<p>Therefore $x =1/3$ is the only real solution.</p>\n"
},
{
"answer_id": 2095892,
"score": 4,
"text": "<p>Rewrite your equation so that there is only one square root on each side. We get\n$$\\sqrt{4x^2+5x+1}=2\\sqrt{x^2-x+1}+9x-3.$$\nSquaring both sides we get\n$$\n\\begin{align}\n4x^2+5x+1&=4(x^2-x+1)+4(9x-3)\\sqrt{x^2-x+1}+(81x^2-54x+9)\\\\\n&=85x^2-58x+13+4(9x-3)\\sqrt{x^2-x+1}\\end{align}.$$\nWe get\n$$81x^2-63x+12=-4(9x-3)\\sqrt{x^2-x+1}.$$\nDividing by $3$, we get\n$$27x^2-21x+4=-4(3x-1)\\sqrt{x^2-x+1}.$$\nApply factoring at the LHS, we get\n$$(9x-4)(3x-1)=-(3x-1)\\cdot 4\\sqrt{x^2-x+1}$$\nHence,\n$$(9x-4)(3x-1)+(3x-1)\\cdot 4\\sqrt{x^2-x+1}=0.$$\nThus, \n$$(3x-1)\\cdot\\Big[9x-4)+ 4\\sqrt{x^2-x+1}\\Big]=0.$$\nEither\n$x=\\frac{1}{3}$ or\n$$9x-4=-4\\sqrt{x^2-x+1}.$$\nSquaring again to both sides, we get\n$$81x^2-72x+16=16(x^2-x+1).$$\nWe get\n$$65x^2-56x=0.$$\nEither $x=0$ or $x=\\frac{56}{65}.$ Only $x=\\frac{1}{3}$ satisfies the original equation.</p>\n"
},
{
"answer_id": 2095907,
"score": 2,
"text": "<p>Try to multiply both sides by $(\\sqrt{4x^2+5x+1}+2\\sqrt{x^2-x+1})$. </p>\n\n<p>In the left side you will get $ 9x-3$ by difference of squares.</p>\n"
}
] | Question: Solve the equation ${\sqrt{4x^2 + 5x+1}} - 2{\sqrt {x^2-x+1}} = 9x-3$
Answer 1: <p>Notice that</p>
<p>$$\left({\sqrt{4x^2 + 5x+1}} - 2{\sqrt {x^2-x+1}}\right) \left({\sqrt{4x^2 + 5x+1}} + 2{\sqrt {x^2-x+1}}\right) = 9x-3.$$
Thus, </p>
<p>$$9x-3 = \left(9x-3\right)\left({\sqrt{4x^2 + 5x+1}} + 2{\sqrt {x^2-x+1}}\right) \implies $$</p>
<p>$$9x-3 = 0 \implies x=1/3$$ or</p>
<p>$${\sqrt{4x^2 + 5x+1}} + 2{\sqrt {x^2-x+1}} = 1.$$</p>
<p>But the minimal value of $x^2-x+1$ is $3/4$, which implies that
$$ 2{\sqrt {x^2-x+1}} \geq \sqrt{3} > 1.$$</p>
<p>Therefore $x =1/3$ is the only real solution.</p>
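(Editor's check: the difference-of-squares identity above is easy to confirm with a computer algebra system; a small sketch in Python with sympy, not part of the original answer.)
```python
# Verify (A - B)(A + B) = 9x - 3 for the two radicals used above.
import sympy as sp

x = sp.symbols('x', real=True)
A = sp.sqrt(4*x**2 + 5*x + 1)
B = 2*sp.sqrt(x**2 - x + 1)
print(sp.expand((A - B) * (A + B)))  # prints 9*x - 3
```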
Answer 2: <p>Rewrite your equation so that there is only one square root on each side. We get
$$\sqrt{4x^2+5x+1}=2\sqrt{x^2-x+1}+9x-3.$$
Squaring both sides we get
$$
\begin{align}
4x^2+5x+1&=4(x^2-x+1)+4(9x-3)\sqrt{x^2-x+1}+(81x^2-54x+9)\\
&=85x^2-58x+13+4(9x-3)\sqrt{x^2-x+1}.\end{align}$$
We get
$$81x^2-63x+12=-4(9x-3)\sqrt{x^2-x+1}.$$
Dividing by $3$, we get
$$27x^2-21x+4=-4(3x-1)\sqrt{x^2-x+1}.$$
Factoring the LHS, we get
$$(9x-4)(3x-1)=-(3x-1)\cdot 4\sqrt{x^2-x+1}$$
Hence,
$$(9x-4)(3x-1)+(3x-1)\cdot 4\sqrt{x^2-x+1}=0.$$
Thus,
$$(3x-1)\cdot\Big[(9x-4)+ 4\sqrt{x^2-x+1}\Big]=0.$$
Either
$x=\frac{1}{3}$ or
$$9x-4=-4\sqrt{x^2-x+1}.$$
Squaring both sides again, we get
$$81x^2-72x+16=16(x^2-x+1).$$
We get
$$65x^2-56x=0.$$
Either $x=0$ or $x=\frac{56}{65}.$ Only $x=\frac{1}{3}$ satisfies the original equation.</p>
Answer 3: <p>Try to multiply both sides by $(\sqrt{4x^2+5x+1}+2\sqrt{x^2-x+1})$. </p>
<p>On the left side you will get $9x-3$ by the difference of squares.</p>
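(Editor's check: the full real solution set, and the spurious roots produced by squaring, can be confirmed with sympy; a sketch, not part of the original answers.)
```python
# Solve the original radical equation and test the candidate roots
# produced by squaring (x = 0 and x = 56/65).
import sympy as sp

x = sp.symbols('x', real=True)
lhs = sp.sqrt(4*x**2 + 5*x + 1) - 2*sp.sqrt(x**2 - x + 1)
rhs = 9*x - 3

print(sp.solve(sp.Eq(lhs, rhs), x))  # expected: [1/3]

for cand in (sp.Integer(0), sp.Rational(56, 65)):
    residual = sp.simplify(lhs.subs(x, cand) - rhs.subs(x, cand))
    print(cand, residual)  # nonzero residuals: not actual solutions
```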
|
Solve an equation with four nested radicals | 2,096,155 | 5 | https://math.stackexchange.com/questions/2096155/solve-an-equation-with-four-nested-radicals | <p>I have to solve the equation $\sqrt{x-{\sqrt{x-{\sqrt{x-{\sqrt{x-5}}}}}}}=5$.</p>
<p>Repeated squaring of both sides of the equation makes it very complex.</p>
<p>Is there any substitution or something similar which can simplify the problem ?</p>
<p>Does the problem have a solution if "generalized" as $\sqrt{x-{\sqrt{x-{\sqrt{x-{\sqrt{x-n}}}}}}}=n$ ?</p>
| [
{
"answer_id": 2096195,
"score": 6,
"text": "<p>Let we put together the suggestion given in the comments. We are going to tackle the general case, under the assumption $n\\geq 1$. Clearly, $x=\\color{red}{n(n+1)}$ is a solution of the given equation, since it is a solution of $\\sqrt{x-n}=n$. So it is enough to prove it is the <em>only solution</em>. For simplicity, let\n$$ f_1(x)=\\sqrt{x-n},\\qquad f_2(x)=\\sqrt{x-f_1(x)},$$ \n$$f_3(x)=\\sqrt{x-f_2(x)},\\quad f_4(x)=\\sqrt{x-f_3(x)}$$\nand $I=(n,+\\infty)$. $f_1(x)$ and $f_3(x)$ are increasing and positive functions on $I$, due to the fact that\n$$ \\sqrt{y-n}-\\sqrt{x-n} = \\frac{y-x}{\\sqrt{y-n}+\\sqrt{x-n}} $$\nhas the same sign of $y-x$. That also implies that $f_3(x)$ is quite close to $\\sqrt{x}$.\nIn particular, we cannot state that $f_4(x)$ is increasing on $I$ (as a matter of facts, it is not) but we may state that $f_4(x)$ is increasing on $J=(n+1,+\\infty)$. Since every solution of $f_4(x)=n$ has to be greater than $n+1$, it follows that $x=n(n+1)$ is the only solution, as wanted.</p>\n"
},
{
"answer_id": 2096221,
"score": 2,
"text": "<p>For the moment, forget the question and consider this equation. \n$$\n\\sqrt{x-p}=p\n$$\nIt is easy to see that :\n$$\nx-p=p^2\n\\\\\n\\implies x=p(p+1)\n$$\nNow consider :\n$$\nx-\\sqrt{x-p}=x-p=k\n\\\\\nx-\\sqrt{x-\\sqrt{x-p}}=x-\\sqrt{k}=x-\\sqrt{x-p}=x-p=p^2\n$$\nAnd so on. \nIt is easy to see that the solution of $x$ always remains $p(p+1)$. </p>\n"
}
] | Question: Solve an equation with four nested radicals
Answer 1: <p>Let us put together the suggestions given in the comments. We are going to tackle the general case, under the assumption $n\geq 1$. Clearly, $x=\color{red}{n(n+1)}$ is a solution of the given equation, since it is a solution of $\sqrt{x-n}=n$. So it is enough to prove it is the <em>only solution</em>. For simplicity, let
$$ f_1(x)=\sqrt{x-n},\qquad f_2(x)=\sqrt{x-f_1(x)},$$
$$f_3(x)=\sqrt{x-f_2(x)},\quad f_4(x)=\sqrt{x-f_3(x)}$$
and $I=(n,+\infty)$. $f_1(x)$ and $f_3(x)$ are increasing and positive functions on $I$, due to the fact that
$$ \sqrt{y-n}-\sqrt{x-n} = \frac{y-x}{\sqrt{y-n}+\sqrt{x-n}} $$
has the same sign as $y-x$. That also implies that $f_3(x)$ is quite close to $\sqrt{x}$.
In particular, we cannot state that $f_4(x)$ is increasing on $I$ (as a matter of fact, it is not) but we may state that $f_4(x)$ is increasing on $J=(n+1,+\infty)$. Since every solution of $f_4(x)=n$ has to be greater than $n+1$, it follows that $x=n(n+1)$ is the only solution, as wanted.</p>
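(Editor's check: a numerical sketch of the monotonicity claim; $n=5$ is an arbitrary choice, and the function names mirror the answer's $f_1,\dots,f_4$.)
```python
# Check numerically that f4 is increasing on (n+1, oo) and that
# f4(n*(n+1)) = n, as claimed in the argument above.
import math

def f4(x, n):
    f1 = math.sqrt(x - n)
    f2 = math.sqrt(x - f1)
    f3 = math.sqrt(x - f2)
    return math.sqrt(x - f3)

n = 5
xs = [n + 1 + 0.1 * k for k in range(1, 500)]
vals = [f4(x, n) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing
print(f4(n * (n + 1), n))  # prints 5.0, i.e. n
```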
Answer 2: <p>For the moment, forget the question and consider this equation.
$$
\sqrt{x-p}=p
$$
It is easy to see that:
$$
x-p=p^2
\\
\implies x=p(p+1)
$$
Now consider:
$$
x-\sqrt{x-p}=x-p=k
\\
x-\sqrt{x-\sqrt{x-p}}=x-\sqrt{k}=x-\sqrt{x-p}=x-p=p^2
$$
And so on.
It is easy to see that the solution for $x$ always remains $p(p+1)$. </p>
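(Editor's check: the same verification run to arbitrary depth; `depth` counts the nested radicals, and $p=5$ is an arbitrary choice.)
```python
# Confirm that x = p*(p+1) solves the equation for any number of
# nested radicals, as the argument above suggests.
import math

def nested(x, p, depth):
    value = math.sqrt(x - p)        # innermost radical
    for _ in range(depth - 1):      # wrap the remaining depth-1 radicals
        value = math.sqrt(x - value)
    return value

p = 5
x = p * (p + 1)                     # x = 30 for p = 5
for depth in range(1, 8):
    print(depth, nested(x, p, depth))  # every line prints 5.0
```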
|
Decomposition of this partial function | 2,096,391 | 5 | https://math.stackexchange.com/questions/2096391/decomposition-of-this-partial-function | <p>I came across this
$$\int \frac{dx}{x(x^2+1)^2}$$</p>
<p>in "Method of partial functions" in my Calculus I book.
The thing is that, however, decomposing $\frac{1}{x(x^2+1)^2}$ would take long in this way:
$$\frac{1}{x(x^2+1)^2}= \frac{A}{x}+\frac{Bx+C}{x^2+1}+\frac{Dx+E}{(x^2+1)^2}$$</p>
<p>So is there a quicker or more practical way that I can use?</p>
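(Editor's note: one practical cross-check for any hand decomposition is a computer algebra system; a sketch with Python's sympy, not part of the original question.)
```python
# sympy's apart() computes the partial fraction decomposition directly,
# which is handy for checking coefficients found by hand.
import sympy as sp

x = sp.symbols('x')
expr = 1 / (x * (x**2 + 1)**2)
print(sp.apart(expr))        # 1/x - x/(x**2 + 1) - x/(x**2 + 1)**2, up to term order
print(sp.integrate(expr, x)) # sympy can also integrate it outright
```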
| [
{
"answer_id": 2096454,
"score": 4,
"text": "<p>Let's take the decomposition you give and see how quick we can go</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{A}{x}+\\frac{Bx+C}{x^2+1}+\\frac{Dx+E}{(x^2+1)^2}$$</p>\n\n<p>Multiplying by $x$ the two sides and making $x=0$ one gets $A=1$.</p>\n\n<p>Multiplying by $(x^2+1)^2$ and making $x=i$ we get $-i=Di+E$ and this means $D=-1$ and $E=0$ . Now rewrite the decomposition</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{1}{x}+\\frac{Bx+C}{x^2+1}-\\frac{x}{(x^2+1)^2}$$</p>\n\n<p>This is equivalent to</p>\n\n<p>$$\\begin{align}{Bx+C\\over x^2+1}=&{1\\over x(x^2+1)^2}-{1\\over x}+{x\\over (x^2+1)^2}\\\\=&{(x^2+1)(1-1-x^2)\\over x(x^2+1)^2}\\\\=&-{x\\over x^2+1}\\end{align}$$</p>\n\n<p>Looks pretty quick to me</p>\n"
},
{
"answer_id": 2096442,
"score": 2,
"text": "<p>One way to simplify this a bit, is to multiply numerator and denominator of your integrand by $x$:\n$$\\int \\frac{x}{x^2(x^2+1)^2}dx$$</p>\n\n<p>Now substituting $y = x^2$; $dy = 2x dx$, you get the integral \n$$\\frac{1}{2}\\int \\frac{1}{y(y+1)^2}dy$$\nSplitting this in partial fractions is more straightforward than what you had before:\n$$\\frac{1}{y(y+1)^2} = -\\frac{1}{(y+1)^2} -\\frac{1}{y+1} + \\frac{1}{y}$$\nSolving the integral then gives\n$$\\frac{1}{2}\\left(\\frac{1}{y+1} + \\log(y) -\\log(y+1)\\right)+c$$\nSubsituting $y = x^2$ then gives\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} + \\log(x^2) -\\log(x^2+1)\\right)+c$$\nwhich can be slightly simplified to\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} -\\log(x^2+1)\\right)+ \\log(x)+c$$</p>\n"
},
{
"answer_id": 2096447,
"score": 1,
"text": "<p>Let $\\displaystyle \\mathcal{I} = \\int\\frac{1}{x(x^2+1)^2}dx$</p>\n\n<p>substitute $\\displaystyle x = \\frac{1}{t}$ and $\\displaystyle dx = -\\frac{1}{t^2}$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{t^5}{(1+t^2)^2}\\cdot \\frac{1}{t^2}dt = -\\frac{1}{2}\\int\\frac{t^2\\cdot 2t }{(1+t^2)^2}dt$</p>\n\n<p>put $1+t^2=u\\;,$ then $2tdt = du$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{(u-1)}{u^2}du = +\\int u^{-2}du+\\int \\frac{1}{u}du$</p>\n"
},
{
"answer_id": 2097377,
"score": 1,
"text": "<p>This is not a complete answer, but rather a helpful observation.</p>\n\n<p>First note that the integrand is odd.</p>\n\n<p>Secondly, note that for real factorisations, there is no conjugation from negative terms that can exist (e.g. $1-x$ and $1+x$)</p>\n\n<p>With this in mind, we proceed as follows:</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2} = \\frac{a}{x} + \\frac{bx}{x^2+1}+\\frac{cx}{(x^2+1)^2}$$</p>\n\n<p>This reduces the $5$ variable problem into a $3$ variable problem, which is much more manageable.</p>\n"
},
{
"answer_id": 2096454,
"score": 4,
"text": "<p>Let's take the decomposition you give and see how quick we can go</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{A}{x}+\\frac{Bx+C}{x^2+1}+\\frac{Dx+E}{(x^2+1)^2}$$</p>\n\n<p>Multiplying by $x$ the two sides and making $x=0$ one gets $A=1$.</p>\n\n<p>Multiplying by $(x^2+1)^2$ and making $x=i$ we get $-i=Di+E$ and this means $D=-1$ and $E=0$ . Now rewrite the decomposition</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{1}{x}+\\frac{Bx+C}{x^2+1}-\\frac{x}{(x^2+1)^2}$$</p>\n\n<p>This is equivalent to</p>\n\n<p>$$\\begin{align}{Bx+C\\over x^2+1}=&{1\\over x(x^2+1)^2}-{1\\over x}+{x\\over (x^2+1)^2}\\\\=&{(x^2+1)(1-1-x^2)\\over x(x^2+1)^2}\\\\=&-{x\\over x^2+1}\\end{align}$$</p>\n\n<p>Looks pretty quick to me</p>\n"
},
{
"answer_id": 2096442,
"score": 2,
"text": "<p>One way to simplify this a bit, is to multiply numerator and denominator of your integrand by $x$:\n$$\\int \\frac{x}{x^2(x^2+1)^2}dx$$</p>\n\n<p>Now substituting $y = x^2$; $dy = 2x dx$, you get the integral \n$$\\frac{1}{2}\\int \\frac{1}{y(y+1)^2}dy$$\nSplitting this in partial fractions is more straightforward than what you had before:\n$$\\frac{1}{y(y+1)^2} = -\\frac{1}{(y+1)^2} -\\frac{1}{y+1} + \\frac{1}{y}$$\nSolving the integral then gives\n$$\\frac{1}{2}\\left(\\frac{1}{y+1} + \\log(y) -\\log(y+1)\\right)+c$$\nSubsituting $y = x^2$ then gives\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} + \\log(x^2) -\\log(x^2+1)\\right)+c$$\nwhich can be slightly simplified to\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} -\\log(x^2+1)\\right)+ \\log(x)+c$$</p>\n"
},
{
"answer_id": 2096447,
"score": 1,
"text": "<p>Let $\\displaystyle \\mathcal{I} = \\int\\frac{1}{x(x^2+1)^2}dx$</p>\n\n<p>substitute $\\displaystyle x = \\frac{1}{t}$ and $\\displaystyle dx = -\\frac{1}{t^2}$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{t^5}{(1+t^2)^2}\\cdot \\frac{1}{t^2}dt = -\\frac{1}{2}\\int\\frac{t^2\\cdot 2t }{(1+t^2)^2}dt$</p>\n\n<p>put $1+t^2=u\\;,$ then $2tdt = du$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{(u-1)}{u^2}du = +\\int u^{-2}du+\\int \\frac{1}{u}du$</p>\n"
},
{
"answer_id": 2096519,
"score": 1,
"text": "<p>\\begin{equation}\n\\int \\frac{dx}{x(x^2+1)^2}\n\\end{equation}</p>\n\n<p>\\begin{equation}\n\\frac{1}{x(x+i)^2(x-i)^2}=\\dfrac{A}{x}+\\dfrac{B}{x+i}+\\dfrac{C}{x-i}+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\n\\end{equation}</p>\n\n<ol>\n<li>Let $x=0$. Then $A=\\dfrac{1}{(0+i)^2(0-i)^2}=1$\\</li>\n<li>Let $x=-i$. Then $B=\\dfrac{1}{-i(-i-i)^2}=-\\dfrac{1}{2}$\\</li>\n<li>Let $x=i$. Then $C=\\dfrac{1}{i(i+i)^2}=-\\dfrac{1}{2}$\\</li>\n</ol>\n\n<p>Therefore we have</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{1}{2}\\left(\\dfrac{1}{x+i}+\\dfrac{1}{x-i}\\right)+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{D(x^2-2ix-1)+E(x^2+2ix-1)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{(D+E)x^2+2(E-D)ix-(D+E)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}\nNote that since the $x^2$ term must vanish it must be the case that $D=-E$.</p>\n\n<p>Thus we only have to find the value of $E$.</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}</p>\n\n<p>This equation holds for every value of $x$ with the exception of $x=0,i,-i$. Therefore, when $x=1$ it is true that</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{4}&=&1-\\dfrac{1}{2}+\\dfrac{E}{2}i\\\\\nE&=&\\dfrac{i}{2}\n\\end{eqnarray}</p>\n\n<p>Therefore\n\\begin{equation}\n\\frac{1}{x(x^2+1)^2}=\\dfrac{1}{x}-\\dfrac{x}{x^2+1}-\\dfrac{x}{(x^2+1)^2}\n\\end{equation}</p>\n"
},
{
"answer_id": 2097377,
"score": 1,
"text": "<p>This is not a complete answer, but rather a helpful observation.</p>\n\n<p>First note that the integrand is odd.</p>\n\n<p>Secondly, note that for real factorisations, there is no conjugation from negative terms that can exist (e.g. $1-x$ and $1+x$)</p>\n\n<p>With this in mind, we proceed as follows:</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2} = \\frac{a}{x} + \\frac{bx}{x^2+1}+\\frac{cx}{(x^2+1)^2}$$</p>\n\n<p>This reduces the $5$ variable problem into a $3$ variable problem, which is much more manageable.</p>\n"
},
{
"answer_id": 2096454,
"score": 4,
"text": "<p>Let's take the decomposition you give and see how quick we can go</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{A}{x}+\\frac{Bx+C}{x^2+1}+\\frac{Dx+E}{(x^2+1)^2}$$</p>\n\n<p>Multiplying by $x$ the two sides and making $x=0$ one gets $A=1$.</p>\n\n<p>Multiplying by $(x^2+1)^2$ and making $x=i$ we get $-i=Di+E$ and this means $D=-1$ and $E=0$ . Now rewrite the decomposition</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{1}{x}+\\frac{Bx+C}{x^2+1}-\\frac{x}{(x^2+1)^2}$$</p>\n\n<p>This is equivalent to</p>\n\n<p>$$\\begin{align}{Bx+C\\over x^2+1}=&{1\\over x(x^2+1)^2}-{1\\over x}+{x\\over (x^2+1)^2}\\\\=&{(x^2+1)(1-1-x^2)\\over x(x^2+1)^2}\\\\=&-{x\\over x^2+1}\\end{align}$$</p>\n\n<p>Looks pretty quick to me</p>\n"
},
{
"answer_id": 2096442,
"score": 2,
"text": "<p>One way to simplify this a bit, is to multiply numerator and denominator of your integrand by $x$:\n$$\\int \\frac{x}{x^2(x^2+1)^2}dx$$</p>\n\n<p>Now substituting $y = x^2$; $dy = 2x dx$, you get the integral \n$$\\frac{1}{2}\\int \\frac{1}{y(y+1)^2}dy$$\nSplitting this in partial fractions is more straightforward than what you had before:\n$$\\frac{1}{y(y+1)^2} = -\\frac{1}{(y+1)^2} -\\frac{1}{y+1} + \\frac{1}{y}$$\nSolving the integral then gives\n$$\\frac{1}{2}\\left(\\frac{1}{y+1} + \\log(y) -\\log(y+1)\\right)+c$$\nSubsituting $y = x^2$ then gives\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} + \\log(x^2) -\\log(x^2+1)\\right)+c$$\nwhich can be slightly simplified to\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} -\\log(x^2+1)\\right)+ \\log(x)+c$$</p>\n"
},
{
"answer_id": 2096447,
"score": 1,
"text": "<p>Let $\\displaystyle \\mathcal{I} = \\int\\frac{1}{x(x^2+1)^2}dx$</p>\n\n<p>substitute $\\displaystyle x = \\frac{1}{t}$ and $\\displaystyle dx = -\\frac{1}{t^2}$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{t^5}{(1+t^2)^2}\\cdot \\frac{1}{t^2}dt = -\\frac{1}{2}\\int\\frac{t^2\\cdot 2t }{(1+t^2)^2}dt$</p>\n\n<p>put $1+t^2=u\\;,$ then $2tdt = du$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{(u-1)}{u^2}du = +\\int u^{-2}du+\\int \\frac{1}{u}du$</p>\n"
},
{
"answer_id": 2096519,
"score": 1,
"text": "<p>\\begin{equation}\n\\int \\frac{dx}{x(x^2+1)^2}\n\\end{equation}</p>\n\n<p>\\begin{equation}\n\\frac{1}{x(x+i)^2(x-i)^2}=\\dfrac{A}{x}+\\dfrac{B}{x+i}+\\dfrac{C}{x-i}+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\n\\end{equation}</p>\n\n<ol>\n<li>Let $x=0$. Then $A=\\dfrac{1}{(0+i)^2(0-i)^2}=1$\\</li>\n<li>Let $x=-i$. Then $B=\\dfrac{1}{-i(-i-i)^2}=-\\dfrac{1}{2}$\\</li>\n<li>Let $x=i$. Then $C=\\dfrac{1}{i(i+i)^2}=-\\dfrac{1}{2}$\\</li>\n</ol>\n\n<p>Therefore we have</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{1}{2}\\left(\\dfrac{1}{x+i}+\\dfrac{1}{x-i}\\right)+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{D(x^2-2ix-1)+E(x^2+2ix-1)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{(D+E)x^2+2(E-D)ix-(D+E)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}\nNote that since the $x^2$ term must vanish it must be the case that $D=-E$.</p>\n\n<p>Thus we only have to find the value of $E$.</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}</p>\n\n<p>This equation holds for every value of $x$ with the exception of $x=0,i,-i$. Therefore, when $x=1$ it is true that</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{4}&=&1-\\dfrac{1}{2}+\\dfrac{E}{2}i\\\\\nE&=&\\dfrac{i}{2}\n\\end{eqnarray}</p>\n\n<p>Therefore\n\\begin{equation}\n\\frac{1}{x(x^2+1)^2}=\\dfrac{1}{x}-\\dfrac{x}{x^2+1}-\\dfrac{x}{(x^2+1)^2}\n\\end{equation}</p>\n"
},
{
"answer_id": 2097377,
"score": 1,
"text": "<p>This is not a complete answer, but rather a helpful observation.</p>\n\n<p>First note that the integrand is odd.</p>\n\n<p>Secondly, note that for real factorisations, there is no conjugation from negative terms that can exist (e.g. $1-x$ and $1+x$)</p>\n\n<p>With this in mind, we proceed as follows:</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2} = \\frac{a}{x} + \\frac{bx}{x^2+1}+\\frac{cx}{(x^2+1)^2}$$</p>\n\n<p>This reduces the $5$ variable problem into a $3$ variable problem, which is much more manageable.</p>\n"
},
{
"answer_id": 2096454,
"score": 4,
"text": "<p>Let's take the decomposition you give and see how quick we can go</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{A}{x}+\\frac{Bx+C}{x^2+1}+\\frac{Dx+E}{(x^2+1)^2}$$</p>\n\n<p>Multiplying by $x$ the two sides and making $x=0$ one gets $A=1$.</p>\n\n<p>Multiplying by $(x^2+1)^2$ and making $x=i$ we get $-i=Di+E$ and this means $D=-1$ and $E=0$ . Now rewrite the decomposition</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{1}{x}+\\frac{Bx+C}{x^2+1}-\\frac{x}{(x^2+1)^2}$$</p>\n\n<p>This is equivalent to</p>\n\n<p>$$\\begin{align}{Bx+C\\over x^2+1}=&{1\\over x(x^2+1)^2}-{1\\over x}+{x\\over (x^2+1)^2}\\\\=&{(x^2+1)(1-1-x^2)\\over x(x^2+1)^2}\\\\=&-{x\\over x^2+1}\\end{align}$$</p>\n\n<p>Looks pretty quick to me</p>\n"
},
{
"answer_id": 2096442,
"score": 2,
"text": "<p>One way to simplify this a bit, is to multiply numerator and denominator of your integrand by $x$:\n$$\\int \\frac{x}{x^2(x^2+1)^2}dx$$</p>\n\n<p>Now substituting $y = x^2$; $dy = 2x dx$, you get the integral \n$$\\frac{1}{2}\\int \\frac{1}{y(y+1)^2}dy$$\nSplitting this in partial fractions is more straightforward than what you had before:\n$$\\frac{1}{y(y+1)^2} = -\\frac{1}{(y+1)^2} -\\frac{1}{y+1} + \\frac{1}{y}$$\nSolving the integral then gives\n$$\\frac{1}{2}\\left(\\frac{1}{y+1} + \\log(y) -\\log(y+1)\\right)+c$$\nSubsituting $y = x^2$ then gives\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} + \\log(x^2) -\\log(x^2+1)\\right)+c$$\nwhich can be slightly simplified to\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} -\\log(x^2+1)\\right)+ \\log(x)+c$$</p>\n"
},
{
"answer_id": 2096447,
"score": 1,
"text": "<p>Let $\\displaystyle \\mathcal{I} = \\int\\frac{1}{x(x^2+1)^2}dx$</p>\n\n<p>substitute $\\displaystyle x = \\frac{1}{t}$ and $\\displaystyle dx = -\\frac{1}{t^2}$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{t^5}{(1+t^2)^2}\\cdot \\frac{1}{t^2}dt = -\\frac{1}{2}\\int\\frac{t^2\\cdot 2t }{(1+t^2)^2}dt$</p>\n\n<p>put $1+t^2=u\\;,$ then $2tdt = du$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{(u-1)}{u^2}du = +\\int u^{-2}du+\\int \\frac{1}{u}du$</p>\n"
},
{
"answer_id": 2096519,
"score": 1,
"text": "<p>\\begin{equation}\n\\int \\frac{dx}{x(x^2+1)^2}\n\\end{equation}</p>\n\n<p>\\begin{equation}\n\\frac{1}{x(x+i)^2(x-i)^2}=\\dfrac{A}{x}+\\dfrac{B}{x+i}+\\dfrac{C}{x-i}+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\n\\end{equation}</p>\n\n<ol>\n<li>Let $x=0$. Then $A=\\dfrac{1}{(0+i)^2(0-i)^2}=1$\\</li>\n<li>Let $x=-i$. Then $B=\\dfrac{1}{-i(-i-i)^2}=-\\dfrac{1}{2}$\\</li>\n<li>Let $x=i$. Then $C=\\dfrac{1}{i(i+i)^2}=-\\dfrac{1}{2}$\\</li>\n</ol>\n\n<p>Therefore we have</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{1}{2}\\left(\\dfrac{1}{x+i}+\\dfrac{1}{x-i}\\right)+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{D(x^2-2ix-1)+E(x^2+2ix-1)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{(D+E)x^2+2(E-D)ix-(D+E)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}\nNote that since the $x^2$ term must vanish it must be the case that $D=-E$.</p>\n\n<p>Thus we only have to find the value of $E$.</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}</p>\n\n<p>This equation holds for every value of $x$ with the exception of $x=0,i,-i$. Therefore, when $x=1$ it is true that</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{4}&=&1-\\dfrac{1}{2}+\\dfrac{E}{2}i\\\\\nE&=&\\dfrac{i}{2}\n\\end{eqnarray}</p>\n\n<p>Therefore\n\\begin{equation}\n\\frac{1}{x(x^2+1)^2}=\\dfrac{1}{x}-\\dfrac{x}{x^2+1}-\\dfrac{x}{(x^2+1)^2}\n\\end{equation}</p>\n"
},
{
"answer_id": 2097377,
"score": 1,
"text": "<p>This is not a complete answer, but rather a helpful observation.</p>\n\n<p>First note that the integrand is odd.</p>\n\n<p>Secondly, note that for real factorisations, there is no conjugation from negative terms that can exist (e.g. $1-x$ and $1+x$)</p>\n\n<p>With this in mind, we proceed as follows:</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2} = \\frac{a}{x} + \\frac{bx}{x^2+1}+\\frac{cx}{(x^2+1)^2}$$</p>\n\n<p>This reduces the $5$ variable problem into a $3$ variable problem, which is much more manageable.</p>\n"
},
{
"answer_id": 2096454,
"score": 4,
"text": "<p>Let's take the decomposition you give and see how quick we can go</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{A}{x}+\\frac{Bx+C}{x^2+1}+\\frac{Dx+E}{(x^2+1)^2}$$</p>\n\n<p>Multiplying by $x$ the two sides and making $x=0$ one gets $A=1$.</p>\n\n<p>Multiplying by $(x^2+1)^2$ and making $x=i$ we get $-i=Di+E$ and this means $D=-1$ and $E=0$ . Now rewrite the decomposition</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{1}{x}+\\frac{Bx+C}{x^2+1}-\\frac{x}{(x^2+1)^2}$$</p>\n\n<p>This is equivalent to</p>\n\n<p>$$\\begin{align}{Bx+C\\over x^2+1}=&{1\\over x(x^2+1)^2}-{1\\over x}+{x\\over (x^2+1)^2}\\\\=&{(x^2+1)(1-1-x^2)\\over x(x^2+1)^2}\\\\=&-{x\\over x^2+1}\\end{align}$$</p>\n\n<p>Looks pretty quick to me</p>\n"
},
{
"answer_id": 2096442,
"score": 2,
"text": "<p>One way to simplify this a bit, is to multiply numerator and denominator of your integrand by $x$:\n$$\\int \\frac{x}{x^2(x^2+1)^2}dx$$</p>\n\n<p>Now substituting $y = x^2$; $dy = 2x dx$, you get the integral \n$$\\frac{1}{2}\\int \\frac{1}{y(y+1)^2}dy$$\nSplitting this in partial fractions is more straightforward than what you had before:\n$$\\frac{1}{y(y+1)^2} = -\\frac{1}{(y+1)^2} -\\frac{1}{y+1} + \\frac{1}{y}$$\nSolving the integral then gives\n$$\\frac{1}{2}\\left(\\frac{1}{y+1} + \\log(y) -\\log(y+1)\\right)+c$$\nSubsituting $y = x^2$ then gives\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} + \\log(x^2) -\\log(x^2+1)\\right)+c$$\nwhich can be slightly simplified to\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} -\\log(x^2+1)\\right)+ \\log(x)+c$$</p>\n"
},
{
"answer_id": 2096447,
"score": 1,
"text": "<p>Let $\\displaystyle \\mathcal{I} = \\int\\frac{1}{x(x^2+1)^2}dx$</p>\n\n<p>substitute $\\displaystyle x = \\frac{1}{t}$ and $\\displaystyle dx = -\\frac{1}{t^2}$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{t^5}{(1+t^2)^2}\\cdot \\frac{1}{t^2}dt = -\\frac{1}{2}\\int\\frac{t^2\\cdot 2t }{(1+t^2)^2}dt$</p>\n\n<p>put $1+t^2=u\\;,$ then $2tdt = du$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{(u-1)}{u^2}du = +\\int u^{-2}du+\\int \\frac{1}{u}du$</p>\n"
},
{
"answer_id": 2096519,
"score": 1,
"text": "<p>\\begin{equation}\n\\int \\frac{dx}{x(x^2+1)^2}\n\\end{equation}</p>\n\n<p>\\begin{equation}\n\\frac{1}{x(x+i)^2(x-i)^2}=\\dfrac{A}{x}+\\dfrac{B}{x+i}+\\dfrac{C}{x-i}+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\n\\end{equation}</p>\n\n<ol>\n<li>Let $x=0$. Then $A=\\dfrac{1}{(0+i)^2(0-i)^2}=1$\\</li>\n<li>Let $x=-i$. Then $B=\\dfrac{1}{-i(-i-i)^2}=-\\dfrac{1}{2}$\\</li>\n<li>Let $x=i$. Then $C=\\dfrac{1}{i(i+i)^2}=-\\dfrac{1}{2}$\\</li>\n</ol>\n\n<p>Therefore we have</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{1}{2}\\left(\\dfrac{1}{x+i}+\\dfrac{1}{x-i}\\right)+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{D(x^2-2ix-1)+E(x^2+2ix-1)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{(D+E)x^2+2(E-D)ix-(D+E)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}\nNote that since the $x^2$ term must vanish it must be the case that $D=-E$.</p>\n\n<p>Thus we only have to find the value of $E$.</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}</p>\n\n<p>This equation holds for every value of $x$ with the exception of $x=0,i,-i$. Therefore, when $x=1$ it is true that</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{4}&=&1-\\dfrac{1}{2}+\\dfrac{E}{2}i\\\\\nE&=&\\dfrac{i}{2}\n\\end{eqnarray}</p>\n\n<p>Therefore\n\\begin{equation}\n\\frac{1}{x(x^2+1)^2}=\\dfrac{1}{x}-\\dfrac{x}{x^2+1}-\\dfrac{x}{(x^2+1)^2}\n\\end{equation}</p>\n"
},
{
"answer_id": 2097377,
"score": 1,
"text": "<p>This is not a complete answer, but rather a helpful observation.</p>\n\n<p>First note that the integrand is odd.</p>\n\n<p>Secondly, note that for real factorisations, there is no conjugation from negative terms that can exist (e.g. $1-x$ and $1+x$)</p>\n\n<p>With this in mind, we proceed as follows:</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2} = \\frac{a}{x} + \\frac{bx}{x^2+1}+\\frac{cx}{(x^2+1)^2}$$</p>\n\n<p>This reduces the $5$ variable problem into a $3$ variable problem, which is much more manageable.</p>\n"
},
{
"answer_id": 2096454,
"score": 4,
"text": "<p>Let's take the decomposition you give and see how quick we can go</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{A}{x}+\\frac{Bx+C}{x^2+1}+\\frac{Dx+E}{(x^2+1)^2}$$</p>\n\n<p>Multiplying by $x$ the two sides and making $x=0$ one gets $A=1$.</p>\n\n<p>Multiplying by $(x^2+1)^2$ and making $x=i$ we get $-i=Di+E$ and this means $D=-1$ and $E=0$ . Now rewrite the decomposition</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2}= \\frac{1}{x}+\\frac{Bx+C}{x^2+1}-\\frac{x}{(x^2+1)^2}$$</p>\n\n<p>This is equivalent to</p>\n\n<p>$$\\begin{align}{Bx+C\\over x^2+1}=&{1\\over x(x^2+1)^2}-{1\\over x}+{x\\over (x^2+1)^2}\\\\=&{(x^2+1)(1-1-x^2)\\over x(x^2+1)^2}\\\\=&-{x\\over x^2+1}\\end{align}$$</p>\n\n<p>Looks pretty quick to me</p>\n"
},
{
"answer_id": 2096442,
"score": 2,
"text": "<p>One way to simplify this a bit, is to multiply numerator and denominator of your integrand by $x$:\n$$\\int \\frac{x}{x^2(x^2+1)^2}dx$$</p>\n\n<p>Now substituting $y = x^2$; $dy = 2x dx$, you get the integral \n$$\\frac{1}{2}\\int \\frac{1}{y(y+1)^2}dy$$\nSplitting this in partial fractions is more straightforward than what you had before:\n$$\\frac{1}{y(y+1)^2} = -\\frac{1}{(y+1)^2} -\\frac{1}{y+1} + \\frac{1}{y}$$\nSolving the integral then gives\n$$\\frac{1}{2}\\left(\\frac{1}{y+1} + \\log(y) -\\log(y+1)\\right)+c$$\nSubsituting $y = x^2$ then gives\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} + \\log(x^2) -\\log(x^2+1)\\right)+c$$\nwhich can be slightly simplified to\n$$\\frac{1}{2}\\left(\\frac{1}{x^2+1} -\\log(x^2+1)\\right)+ \\log(x)+c$$</p>\n"
},
{
"answer_id": 2096447,
"score": 1,
"text": "<p>Let $\\displaystyle \\mathcal{I} = \\int\\frac{1}{x(x^2+1)^2}dx$</p>\n\n<p>substitute $\\displaystyle x = \\frac{1}{t}$ and $\\displaystyle dx = -\\frac{1}{t^2}$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{t^5}{(1+t^2)^2}\\cdot \\frac{1}{t^2}dt = -\\frac{1}{2}\\int\\frac{t^2\\cdot 2t }{(1+t^2)^2}dt$</p>\n\n<p>put $1+t^2=u\\;,$ then $2tdt = du$</p>\n\n<p>So $\\displaystyle \\mathcal{I} = -\\int\\frac{(u-1)}{u^2}du = +\\int u^{-2}du+\\int \\frac{1}{u}du$</p>\n"
},
{
"answer_id": 2096519,
"score": 1,
"text": "<p>\\begin{equation}\n\\int \\frac{dx}{x(x^2+1)^2}\n\\end{equation}</p>\n\n<p>\\begin{equation}\n\\frac{1}{x(x+i)^2(x-i)^2}=\\dfrac{A}{x}+\\dfrac{B}{x+i}+\\dfrac{C}{x-i}+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\n\\end{equation}</p>\n\n<ol>\n<li>Let $x=0$. Then $A=\\dfrac{1}{(0+i)^2(0-i)^2}=1$\\</li>\n<li>Let $x=-i$. Then $B=\\dfrac{1}{-i(-i-i)^2}=-\\dfrac{1}{2}$\\</li>\n<li>Let $x=i$. Then $C=\\dfrac{1}{i(i+i)^2}=-\\dfrac{1}{2}$\\</li>\n</ol>\n\n<p>Therefore we have</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{1}{2}\\left(\\dfrac{1}{x+i}+\\dfrac{1}{x-i}\\right)+\\dfrac{D}{(x+i)^2}+\\dfrac{E}{(x-i)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{D(x^2-2ix-1)+E(x^2+2ix-1)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{(D+E)x^2+2(E-D)ix-(D+E)}{(x^2+1)^2}\\\\\n&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}\nNote that since the $x^2$ term must vanish it must be the case that $D=-E$.</p>\n\n<p>Thus we only have to find the value of $E$.</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{x(x^2+1)^2}&=&\\dfrac{1}{x}-\\dfrac{x}{x^2+1}+\\dfrac{2Eix}{(x^2+1)^2}\n\\end{eqnarray}</p>\n\n<p>This equation holds for every value of $x$ with the exception of $x=0,i,-i$. Therefore, when $x=1$ it is true that</p>\n\n<p>\\begin{eqnarray}\n\\frac{1}{4}&=&1-\\dfrac{1}{2}+\\dfrac{E}{2}i\\\\\nE&=&\\dfrac{i}{2}\n\\end{eqnarray}</p>\n\n<p>Therefore\n\\begin{equation}\n\\frac{1}{x(x^2+1)^2}=\\dfrac{1}{x}-\\dfrac{x}{x^2+1}-\\dfrac{x}{(x^2+1)^2}\n\\end{equation}</p>\n"
},
{
"answer_id": 2097377,
"score": 1,
"text": "<p>This is not a complete answer, but rather a helpful observation.</p>\n\n<p>First note that the integrand is odd.</p>\n\n<p>Secondly, note that for real factorisations, there is no conjugation from negative terms that can exist (e.g. $1-x$ and $1+x$)</p>\n\n<p>With this in mind, we proceed as follows:</p>\n\n<p>$$\\frac{1}{x(x^2+1)^2} = \\frac{a}{x} + \\frac{bx}{x^2+1}+\\frac{cx}{(x^2+1)^2}$$</p>\n\n<p>This reduces the $5$ variable problem into a $3$ variable problem, which is much more manageable.</p>\n"
}
] | Question: Decomposition of this partial function
Answer 1: <p>Let's take the decomposition you give and see how quickly we can go</p>
<p>$$\frac{1}{x(x^2+1)^2}= \frac{A}{x}+\frac{Bx+C}{x^2+1}+\frac{Dx+E}{(x^2+1)^2}$$</p>
<p>Multiplying both sides by $x$ and setting $x=0$, one gets $A=1$.</p>
<p>Multiplying both sides by $(x^2+1)^2$ and setting $x=i$, we get $-i=Di+E$, which means $D=-1$ and $E=0$. Now rewrite the decomposition</p>
<p>$$\frac{1}{x(x^2+1)^2}= \frac{1}{x}+\frac{Bx+C}{x^2+1}-\frac{x}{(x^2+1)^2}$$</p>
<p>This is equivalent to</p>
<p>$$\begin{align}{Bx+C\over x^2+1}=&{1\over x(x^2+1)^2}-{1\over x}+{x\over (x^2+1)^2}\\=&{-x^2(x^2+1)\over x(x^2+1)^2}\\=&-{x\over x^2+1}\end{align}$$</p>
<p>Looks pretty quick to me</p>
Answer 2: <p>One way to simplify this a bit is to multiply the numerator and denominator of your integrand by $x$:
$$\int \frac{x}{x^2(x^2+1)^2}dx$$</p>
<p>Now substituting $y = x^2$; $dy = 2x dx$, you get the integral
$$\frac{1}{2}\int \frac{1}{y(y+1)^2}dy$$
Splitting this into partial fractions is more straightforward than what you had before:
$$\frac{1}{y(y+1)^2} = -\frac{1}{(y+1)^2} -\frac{1}{y+1} + \frac{1}{y}$$
Solving the integral then gives
$$\frac{1}{2}\left(\frac{1}{y+1} + \log(y) -\log(y+1)\right)+c$$
Substituting $y = x^2$ then gives
$$\frac{1}{2}\left(\frac{1}{x^2+1} + \log(x^2) -\log(x^2+1)\right)+c$$
which can be slightly simplified to
$$\frac{1}{2}\left(\frac{1}{x^2+1} -\log(x^2+1)\right)+ \log(x)+c$$</p>
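A quick sanity check of the decomposition in $y$ used above (an added note, not part of the original answer): recombining the right-hand side over the common denominator gives
$$-\frac{1}{(y+1)^2} -\frac{1}{y+1} + \frac{1}{y} = \frac{-y - y(y+1) + (y+1)^2}{y(y+1)^2} = \frac{1}{y(y+1)^2}.$$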
Answer 3: <p>Let $\displaystyle \mathcal{I} = \int\frac{1}{x(x^2+1)^2}dx$</p>
<p>Substitute $\displaystyle x = \frac{1}{t}$ and $\displaystyle dx = -\frac{1}{t^2}\,dt$</p>
<p>So $\displaystyle \mathcal{I} = -\int\frac{t^5}{(1+t^2)^2}\cdot \frac{1}{t^2}dt = -\frac{1}{2}\int\frac{t^2\cdot 2t }{(1+t^2)^2}dt$</p>
<p>put $1+t^2=u\;,$ then $2tdt = du$</p>
<p>So $\displaystyle \mathcal{I} = -\frac{1}{2}\int\frac{(u-1)}{u^2}du = \frac{1}{2}\int u^{-2}du-\frac{1}{2}\int \frac{du}{u} = -\frac{1}{2u}-\frac{1}{2}\ln u + C,$ where $\displaystyle u = 1+\frac{1}{x^2} = \frac{x^2+1}{x^2}$; this agrees with the other answers up to a constant.</p>
Answer 4: <p>\begin{equation}
\int \frac{dx}{x(x^2+1)^2}
\end{equation}</p>
<p>\begin{equation}
\frac{1}{x(x+i)^2(x-i)^2}=\dfrac{A}{x}+\dfrac{B}{x+i}+\dfrac{C}{x-i}+\dfrac{D}{(x+i)^2}+\dfrac{E}{(x-i)^2}
\end{equation}</p>
<ol>
<li>Let $x=0$ (cover up the factor $x$). Then $A=\dfrac{1}{(0+i)^2(0-i)^2}=1$</li>
<li>For $B$, cover up $(x+i)^2$ and differentiate: $B=\dfrac{d}{dx}\left[\dfrac{1}{x(x-i)^2}\right]_{x=-i}=-\dfrac{1}{2}$</li>
<li>For $C$, cover up $(x-i)^2$ and differentiate: $C=\dfrac{d}{dx}\left[\dfrac{1}{x(x+i)^2}\right]_{x=i}=-\dfrac{1}{2}$</li>
</ol>
<p>Therefore we have</p>
<p>\begin{eqnarray}
\frac{1}{x(x^2+1)^2}&=&\dfrac{1}{x}-\dfrac{1}{2}\left(\dfrac{1}{x+i}+\dfrac{1}{x-i}\right)+\dfrac{D}{(x+i)^2}+\dfrac{E}{(x-i)^2}\\
&=&\dfrac{1}{x}-\dfrac{x}{x^2+1}+\dfrac{D(x^2-2ix-1)+E(x^2+2ix-1)}{(x^2+1)^2}\\
&=&\dfrac{1}{x}-\dfrac{x}{x^2+1}+\dfrac{(D+E)x^2+2(E-D)ix-(D+E)}{(x^2+1)^2}\\
&=&\dfrac{1}{x}-\dfrac{x}{x^2+1}+\dfrac{2Eix}{(x^2+1)^2}
\end{eqnarray}
Note that since the $x^2$ term must vanish it must be the case that $D=-E$.</p>
<p>Thus we only have to find the value of $E$.</p>
<p>\begin{eqnarray}
\frac{1}{x(x^2+1)^2}&=&\dfrac{1}{x}-\dfrac{x}{x^2+1}+\dfrac{2Eix}{(x^2+1)^2}
\end{eqnarray}</p>
<p>This equation holds for every value of $x$ with the exception of $x=0,i,-i$. Therefore, when $x=1$ it is true that</p>
<p>\begin{eqnarray}
\frac{1}{4}&=&1-\dfrac{1}{2}+\dfrac{E}{2}i\\
E&=&\dfrac{i}{2}
\end{eqnarray}</p>
<p>Therefore
\begin{equation}
\frac{1}{x(x^2+1)^2}=\dfrac{1}{x}-\dfrac{x}{x^2+1}-\dfrac{x}{(x^2+1)^2}
\end{equation}</p>
Answer 5: <p>This is not a complete answer, but rather a helpful observation.</p>
<p>First note that the integrand is odd.</p>
<p>Secondly, note that in the real factorisation the factors $x$ and $x^2+1$ each map to themselves (up to sign) under $x\mapsto -x$; there is no pair of factors that gets exchanged (e.g. $1-x$ and $1+x$), so each partial fraction must itself be odd.</p>
<p>With this in mind, we proceed as follows:</p>
<p>$$\frac{1}{x(x^2+1)^2} = \frac{a}{x} + \frac{bx}{x^2+1}+\frac{cx}{(x^2+1)^2}$$</p>
<p>This reduces the $5$-variable problem to a $3$-variable problem, which is much more manageable.</p>
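Editorial note, not part of the original thread: the decomposition that all of the answers above arrive at can be verified mechanically. The sketch below assumes Python with the `sympy` library available; `sp.apart` is sympy's built-in partial-fraction routine.

```python
# Verify 1/(x (x^2+1)^2) = 1/x - x/(x^2+1) - x/(x^2+1)^2 symbolically.
import sympy as sp

x = sp.symbols('x')
f = 1 / (x * (x**2 + 1)**2)
decomposition = 1/x - x/(x**2 + 1) - x/(x**2 + 1)**2

print(sp.simplify(f - decomposition))  # prints 0: the two sides agree
print(sp.apart(f, x))                  # recovers the same three terms
```

Running this should print `0` followed by the same three terms (possibly in a different order).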
|
proving that elements of a sequence are pairwise coprime | 2,096,439 | 5 | https://math.stackexchange.com/questions/2096439/proving-that-elements-of-a-sequence-are-pairwise-coprime | <p>For the sequence $a_n = 6^{2^n} + 1$: </p>
<p>How can I prove that the elements of this sequence are pairwise coprime, i.e. prove that if $m$ is not equal to $n$ then $\gcd(a_m, a_n) = 1$?
(Hint: begin by proving that $a_n \mid (a_{n+1} - 2)$.)</p>
| [
{
"answer_id": 2096444,
"score": 7,
"text": "<p>Suppose the prime $p$ divides $a_{n}$ and $a_{m}$, for $n > m$.</p>\n\n<p>Then $p$ divides the remainder $r$ of the division of $a_{n}$ by $a_{m}$, as $r = a_{n} - a_{m} q$ for some $q$.</p>\n\n<p>The remainder $r$ can be calculated as\n$$\n6^{2^{n}} + 1 = (6^{2^{m}})^{2^{n-m}} + 1 \\equiv (-1)^{2^{n-m}} + 1 = 2 \\pmod{6^{2^{m}} + 1}.\n$$</p>\n\n<p>So $r = 2$, and $p$ could only be $2$, whereas the $a_{n}$ are all odd.</p>\n"
},
{
"answer_id": 2096496,
"score": 2,
"text": "<p>Note that $\\, \\ a_{\\large n+1}\\! = (6^{\\large 2^{\\Large n}})^{\\large 2}+1 = (a_{\\large n}\\!-\\!1)^{\\large 2}+1= f(a_{\\large n}),\\ \\ f(x) = (x\\!-\\!1)^2+1$</p>\n\n<p>${\\rm mod}\\ a_{\\large n}\\!:\\,\\ \\color{#c00}{a_{\\large n+1}}= \\ \\ f(a_{\\large n})\\ \\ \\equiv f(0) \\equiv \\color{#c00}2$<br>\n$\\phantom{{\\rm mod}\\ a_{\\large n}\\!:}\\,\\ \\ \\color{#0a0}{a_{\\large n+2}}= f(\\color{#c00}{a_{\\large n+1}})\\equiv f(\\color{#c00}2) \\equiv\\color{#0a0} 2$<br>\n$\\phantom{{\\rm mod}\\ a_{\\large n}\\!:}\\,\\ \\ a_{\\large n+3}= f(\\color{#0a0}{a_{\\large n+2}})\\equiv f(\\color{#0a0}2) \\equiv 2$<br>\n$\\phantom{{\\rm mod}\\ a_{\\large n}\\!:}\\qquad\\quad\\ \\vdots$<br>\n$\\phantom{{\\rm mod}\\ a_{\\large n}} a_{\\large n+k+1}= f(a_{\\large n+k})\\equiv f(2) \\equiv 2$ </p>\n\n<p>$\\ \\ $ i.e. $\\,\\quad a_{\\large n+k+1}\\! =\\! f^{\\large k}(\\color{#c00}{a_{n+1}})\\!\\equiv\\! f^{\\large k}(\\color{#c00}2)\\equiv 2\\ $ because $\\,2\\,$ is a <strong>fixed point</strong> of $f,\\,$ i.e. $\\,f(2) = 2$</p>\n"
},
{
"answer_id": 2096471,
"score": 1,
"text": "<p>$$a_{n+1}-2=6^{2^{n+1}}-1=(6^{2^n}-1)(6^{2^n}+1)=(6^{2^{n-1}}-1)(6^{2^{n-1}}+1)(6^{2^{n}}+1)=(6^{2^0}-1)(6^{2^{0}}+1)(6^{2^{1}}+1)\\cdots(6^{2^{n-1}}+1)(6^{2^n}+1)=5a_1a_2\\cdots a_n$$\nHence $\\gcd(a_{n+1},a_k)\\leq 2$ where $k=1,2,\\cdots ,n$ and $\\gcd(a_{n+1},a_k)\\not= 2$ since non of $a_n$ are even.</p>\n"
}
] | Question: proving that elements of a sequence are pairwise coprime
Answer 1: <p>Suppose the prime $p$ divides $a_{n}$ and $a_{m}$, for $n > m$.</p>
<p>Then $p$ divides the remainder $r$ of the division of $a_{n}$ by $a_{m}$, as $r = a_{n} - a_{m} q$ for some $q$.</p>
<p>The remainder $r$ can be calculated as
$$
6^{2^{n}} + 1 = (6^{2^{m}})^{2^{n-m}} + 1 \equiv (-1)^{2^{n-m}} + 1 = 2 \pmod{6^{2^{m}} + 1}.
$$</p>
<p>So $r = 2$, and $p$ could only be $2$, whereas the $a_{n}$ are all odd.</p>
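A concrete instance of this computation (an added note, not in the original answer): for $m=1$, $n=2$ we have $a_1 = 6^{2}+1 = 37$ and $a_2 = 6^{4}+1 = 1297$, and indeed
$$1297 = 35\cdot 37 + 2, \qquad\text{so}\qquad a_2 \equiv 2 \pmod{a_1},$$
hence $\gcd(a_1, a_2) = \gcd(37, 2) = 1$.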
Answer 2: <p>Note that $\, \ a_{\large n+1}\! = (6^{\large 2^{\Large n}})^{\large 2}+1 = (a_{\large n}\!-\!1)^{\large 2}+1= f(a_{\large n}),\ \ f(x) = (x\!-\!1)^2+1$</p>
<p>${\rm mod}\ a_{\large n}\!:\,\ \color{#c00}{a_{\large n+1}}= \ \ f(a_{\large n})\ \ \equiv f(0) \equiv \color{#c00}2$<br>
$\phantom{{\rm mod}\ a_{\large n}\!:}\,\ \ \color{#0a0}{a_{\large n+2}}= f(\color{#c00}{a_{\large n+1}})\equiv f(\color{#c00}2) \equiv\color{#0a0} 2$<br>
$\phantom{{\rm mod}\ a_{\large n}\!:}\,\ \ a_{\large n+3}= f(\color{#0a0}{a_{\large n+2}})\equiv f(\color{#0a0}2) \equiv 2$<br>
$\phantom{{\rm mod}\ a_{\large n}\!:}\qquad\quad\ \vdots$<br>
$\phantom{{\rm mod}\ a_{\large n}} a_{\large n+k+1}= f(a_{\large n+k})\equiv f(2) \equiv 2$ </p>
<p>$\ \ $ i.e. $\,\quad a_{\large n+k+1}\! =\! f^{\large k}(\color{#c00}{a_{n+1}})\!\equiv\! f^{\large k}(\color{#c00}2)\equiv 2\ $ because $\,2\,$ is a <strong>fixed point</strong> of $f,\,$ i.e. $\,f(2) = 2$</p>
Answer 3: <p>$$a_{n+1}-2=6^{2^{n+1}}-1=(6^{2^n}-1)(6^{2^n}+1)=(6^{2^{n-1}}-1)(6^{2^{n-1}}+1)(6^{2^{n}}+1)=(6^{2^0}-1)(6^{2^{0}}+1)(6^{2^{1}}+1)\cdots(6^{2^{n-1}}+1)(6^{2^n}+1)=35\,a_1a_2\cdots a_n$$
Hence $\gcd(a_{n+1},a_k)\leq 2$ where $k=1,2,\cdots ,n$, and $\gcd(a_{n+1},a_k)\not= 2$ since none of the $a_n$ are even.</p>
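Editorial note, not part of the original answers: the pairwise-coprimality claim can be spot-checked for the first few terms. The sketch below assumes standard Python; the range checked is arbitrary.

```python
# Check that a_n = 6^(2^n) + 1 are pairwise coprime for n = 1..5.
from math import gcd
from itertools import combinations

a = [6**(2**n) + 1 for n in range(1, 6)]
assert all(gcd(p, q) == 1 for p, q in combinations(a, 2))
print("a_1 .. a_5 are pairwise coprime")
```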
|
the value of $\frac{100^2}{100!}+\sum^{100}_{k=1}|(k^2-3k+1)S_{k}|$ is | 2,096,441 | 5 | https://math.stackexchange.com/questions/2096441/the-value-of-frac1002100-sum100-k-1k2-3k1s-k-is | <p>Let $S_{k},$ where $k=1,2,3,\cdots \cdots ,100$ denote the sum of the infinite geometric series whose first term is $\displaystyle \frac{k-1}{k!}$ and the common ratio is $\displaystyle \frac{1}{k},$ then the value of $\displaystyle \frac{100^2}{100!}+\sum^{100}_{k=1}|(k^2-3k+1)S_{k}|$ is</p>
<p>$\displaystyle S_{k} = \frac{a}{1-r} = \frac{(k-1)\cdot k}{k!\cdot (k-1)} = \frac{1}{(k-1)!}$</p>
<p>So $\displaystyle \sum^{100}_{k=1}\bigg|(k^2-3k+1)S_{k}\bigg| = \sum^{100}_{k=1}\bigg|\frac{(k-1)^2-k}{(k-1)!}\bigg| = \sum^{100}_{k=1}\bigg|\frac{(k-1)}{(k-2)!}-\frac{k}{(k-1)!}\bigg| $</p>
<p>I wasn't able to go further; could someone help me with this? Thanks.</p>
| [
{
"answer_id": 2096450,
"score": 2,
"text": "<p>Hint thats a telescpoic series thus only first and last terms survive so summation is $|1-\\frac {100}{99!}| $ also starting series with $k=2$ makes sense as i dont know whether value of $(-1)!$ is defined</p>\n"
},
{
"answer_id": 2096578,
"score": 2,
"text": "<p>$$\\frac{k-1}{(k-2)!}-\\frac{k}{k!}=\\frac{(k - 2) k^2}{k!}>0 \\implies \\bigg|\\frac{(k-1)}{(k-2)!}-\\frac{k}{(k-1)!}\\bigg|=\\frac{(k-1)}{(k-2)!}-\\frac{k}{(k-1)!}$$</p>\n\n<p>Then as @Archis said use telescoping sum.</p>\n"
}
] | Question: the value of $\frac{100^2}{100!}+\sum^{100}_{k=1}|(k^2-3k+1)S_{k}|$ is
Answer 1: <p>Hint: that's a telescopic series, thus only the first and last terms survive, so the summation is $|1-\frac {100}{99!}|$. Also, starting the series at $k=2$ makes sense, as I don't know whether the value of $(-1)!$ is defined.</p>
Answer 2: <p>$$\frac{k-1}{(k-2)!}-\frac{k}{(k-1)!}=\frac{k^2-3k+1}{(k-1)!}>0 \ \text{for}\ k\ge 3 \implies \bigg|\frac{(k-1)}{(k-2)!}-\frac{k}{(k-1)!}\bigg|=\frac{(k-1)}{(k-2)!}-\frac{k}{(k-1)!}\ \text{for}\ k\ge 3$$</p>
<p>Then, as @Archis said, use a telescoping sum; the $k=1$ and $k=2$ terms, where $k^2-3k+1=-1<0$, are handled separately.</p>
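<p><em>(Editor's check with exact rational arithmetic, not part of the original answers. One subtlety: for $k=1$ the series has first term $0$ and ratio $1$, so $S_1=0$; the closed form $S_k=1/(k-1)!$ only applies for $k\ge 2$.)</em></p>
<pre><code>from fractions import Fraction
from math import factorial

def S(k):
    # S_1 = 0 (every term of that series is 0); S_k = 1/(k-1)! for k >= 2
    return Fraction(0) if k == 1 else Fraction(1, factorial(k - 1))

total = Fraction(100 ** 2, factorial(100)) + sum(
    abs((k * k - 3 * k + 1) * S(k)) for k in range(1, 101))
print(total)  # 3
</code></pre>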
|
How to evaluate this sum $\sum\limits_{i=1}^{n} \left \lfloor \sqrt{2} \cdot i \right \rfloor$ | 2,096,603 | 5 | https://math.stackexchange.com/questions/2096603/how-to-evaluate-this-sum-sum-limits-i-1n-left-lfloor-sqrt2-cdot-i | <p>I have the following sum that I need to evaluate:</p>
<p>$$\sum\limits_{i=1}^{n} \left \lfloor \sqrt{2} \cdot i \right \rfloor$$</p>
<p>If the floor weren't there, it would be easy: $\sqrt2 \sum\limits_{i=1}^{n} i$. But with the floor added I have no clue how to approach it. Is this sum solvable, and if so, how do I evaluate it?</p>
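<p><em>(Editor's note, not part of the original question: one standard exact approach uses $\lfloor i\sqrt2\rfloor = i+\lfloor i(\sqrt2-1)\rfloor$ together with the lattice-point identity $\sum_{i=1}^{n}\lfloor i\beta\rfloor+\sum_{j=1}^{m}\lfloor j/\beta\rfloor=nm$ for irrational $\beta>0$ and $m=\lfloor n\beta\rfloor$; with $\beta=\sqrt2-1$ and $1/\beta=\sqrt2+1$ this gives a recursion that shrinks $n$ by a factor of about $0.414$ per step. The sketch below is mine, with a made-up function name.)</em></p>
<pre><code>from math import isqrt  # exact integer square root, Python 3.8+

def floor_sum_sqrt2(n):
    # exact sum_{i=1}^{n} floor(i*sqrt(2)) in O(log n) big-integer steps
    if n == 0:
        return 0
    m = isqrt(2 * n * n) - n  # m = floor(n*(sqrt(2)-1)), computed exactly
    return n * (n + 1) // 2 + n * m - m * (m + 1) // 2 - floor_sum_sqrt2(m)

# sanity check against the naive sum for small n
import math
assert all(floor_sum_sqrt2(n) == sum(int(i * math.sqrt(2)) for i in range(1, n + 1))
           for n in range(1, 300))
</code></pre>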
| [
{
"answer_id": 2097910,
"score": 5,
"text": "<p>I've found a fast enough algorithm to compute the exact value for $ n = 10^{100} $:</p>\n\n<p>$$ \\sum_{i=1}^{10^{100}} i\\cdot\\sqrt 2 = $$ </p>\n\n<p>$$ 70710678118654752440084436210484903928483593768847403658833986899536623923105351942519376716382078638821760123411090095254685423841027253480565451739737157454059823250037671948325191776995310741236436 $$</p>\n\n<p>Here's the python code:</p>\n\n<pre><code>import math\nimport time\n\n#\n# Helpers for numbers in the form a + b*sqrt(2), encoded as a tuple (a,b)\n#\n\n# compare 2 numbers\ndef root2_cmp(x,y):\n x_a,x_b = x\n y_a,y_b = y\n\n a = x_a - y_a\n b = x_b - y_b\n\n a_sign = cmp(a,0)\n b_sign = cmp(b,0)\n\n if a_sign == b_sign: return a_sign\n\n if a_sign == 0: return b_sign\n if b_sign == 0: return a_sign\n\n # one of a_sign and b_sign is 1 and the other is -1\n # compare a^2 and (b*sqrt(2))^2 = 2*b^2\n\n aa = a*a\n bb2 = b*b*2\n\n # if a^2 > 2b^2, return a_sign otherwise the opposite sign\n return cmp(aa,bb2)*a_sign\n\ndef root2_max(x,y):\n return x if root2_cmp(x,y) > 0 else y\n\ndef root2_min(x,y):\n return x if root2_cmp(x,y) < 0 else y\n\ndef root2_add(x,y):\n return tuple(x_i+y_i for x_i,y_i in zip(x,y))\n\ndef root2_sub(x,y):\n return tuple(x_i-y_i for x_i,y_i in zip(x,y))\n\n# Takes 2 sequence objects in the form (lower_bound, upper_bound, offset, sequence_data)\n# Returns the same data for the concatenation of the sequences.\n\ndef seq_concat(a,b):\n a_lower, a_upper, a_offset, a_seq_data = a\n b_lower, b_upper, b_offset, b_seq_data = b\n\n c_lower = root2_max(a_lower, root2_sub(b_lower, a_offset))\n c_upper = root2_min(a_upper, root2_sub(b_upper, a_offset))\n\n # sanity check - should never be concatenating sequences that can't go together\n assert root2_cmp(c_upper, c_lower) > 0\n\n c_offset = root2_add(a_offset, b_offset)\n\n c_sequence = seq_data_concat(a_seq_data, b_seq_data)\n\n return c_lower, c_upper, c_offset, c_sequence\n\n# Takes 2 sequence data objects in the form (length, slope, y-intercept)\n# Returns the same data for the concatenation of the sequences.\ndef seq_data_concat(a,b):\n a_len, a_slope, a_intercept = a\n b_len, b_slope, b_intercept = b\n\n c_len = a_len + b_len\n c_slope = a_slope + b_slope\n c_intercept = a_slope*b_len + a_intercept + b_intercept\n\n return (c_len, c_slope, c_intercept)\n\n# Compute the sum for i from 1 to n of floor(i*sqrt(2))\ndef root2_floor_sum(n):\n seqs = [\n # lower upper offset seq_data (length, slope, y-intercept)\n ( (0,0), (2,-1), (-1,1), (1,1,1) ),\n ( (2,-1), (1,0), (-2,1), (1,2,2) )\n ]\n\n prev_seq = None\n cur_seq_data = (0,0,0)\n cur_offset = (0,0)\n\n # while current sequence length is below n\n while cur_seq_data[0] < n:\n # remove too-big sequences\n max_len = n - cur_seq_data[0]\n seqs = filter(lambda (lb,ub,off,seq_data): seq_data[0] <= max_len, seqs)\n\n matching_seqs = filter(lambda (lb,ub,off,seq_data): root2_cmp(cur_offset, lb) >= 0 and root2_cmp(cur_offset, ub) < 0, seqs)\n next_seq = max(matching_seqs, key=lambda (lb,ub,off,seq_data): seq_data[0])\n\n if prev_seq != None:\n seq_to_add = seq_concat(prev_seq, next_seq)\n seqs.append(seq_to_add)\n\n next_lb, next_ub, next_off, next_seq_data = next_seq\n\n cur_seq_data = seq_data_concat(cur_seq_data, next_seq_data)\n cur_offset = root2_add(cur_offset, next_off)\n\n prev_seq = next_seq\n\n return cur_seq_data[2]\n\ndef verify_up_to(n):\n # verify the algorithm works for small n\n expected = 0\n root2 = math.sqrt(2)\n for i in range(1,n+1):\n expected += int(math.floor(root2*i))\n assert 
root2_floor_sum(i) == expected\n\nverify_up_to(1000)\n\nstart_time = time.time()\nresult = root2_floor_sum(10**100)\nend_time = time.time()\n\nprint result\nprint 'computed in',int((end_time - start_time)*1000),'ms'\n</code></pre>\n\n<p>Here's the output I get for the above program:</p>\n\n<pre><code>70710678118654752440084436210484903928483593768847403658833986899536623923105351942519376716382078638821760123411090095254685423841027253480565451739737157454059823250037671948325191776995310741236436\ncomputed in 889 ms\n</code></pre>\n\n<h2>Explanation of the algorithm</h2>\n\n<p>First, consider the sequence of differences of successive terms in this sum $ a_n = \\lfloor n\\sqrt 2 \\rfloor - \\lfloor(n-1) \\sqrt 2 \\rfloor $.</p>\n\n<p>This sequence is all $ 1 $s and $ 2 $s: $ 1,1,2,1,2,1,1,2,1, ... $</p>\n\n<p>Given this, we can compute the sum in question with $ \\sum_{i=1}^n i \\cdot a_{n-i+1} $. But this is obviously too slow with $ n = 10^{100} $. So rather than computing the sequence directly, we want to compute some info about it that still lets us compute the sum.</p>\n\n<p>So we will only keep track of the following for the sequence $ a $:</p>\n\n<ul>\n<li><p>The length of the sequence $ len(a) $</p></li>\n<li><p>The function $ f_a(k) = \\sum_{i=1}^{len(a)} (i+k) \\cdot a_{len(a)-i+1} $. Then $ f_a(0) $ is the sum described above. You can think of this as being the above sum, but with $ a $ having $ k $ zeros appended to it. Because $ f_a $ is a finite sum of linear functions, it is linear. Therefore it is uniquely defined by its slope, $ slope(a) $ and $y$-intercept, $ intercept(a) $</p></li>\n</ul>\n\n<p>Now suppose we have the above info for sequences $ a $ and $ b $. We can compute the same info for the concatenation of these sequences, $ c $.</p>\n\n<ul>\n<li><p>$ len(c) = len(a) + len(b) $</p></li>\n<li><p>$ f_c(k) = f_a(k + len(b)) + f_b(k) $, giving us</p>\n\n<ul>\n<li>$ slope(c) = slope(a) + slope(b) $</li>\n<li>$ intercept(c) = slope(a) \\cdot len(b) + intercept(a) + intercept(b) $</li>\n</ul></li>\n</ul>\n\n<p>So this means that we can build the needed info for the whole sequence by combining info of smaller sequences.</p>\n\n<p>Next, we need the fact that this sequence repeats itself a lot (I mean <a href=\"https://oeis.org/A006337\" rel=\"noreferrer\">look at it</a>).</p>\n\n<p>Imagine a line drawn from $ (n,\\sqrt n) $ to $ (m,\\sqrt m) $ with $ m > n $. Between each time this passes through a vertical line, it passes through either $ 1 $ or $ 2 $ horizontal lines. These $ 1 $s and $2$s are exactly the sequence of differences of terms we defined above at indices $ n $ through $m$. Because of the irrationality of $ \\sqrt 2 $, we can move this line around slightly and still get the same sequences. If we've computed the sequence up to some $ k $ and find that the fractional part of the $ k $th term in the original sum, $ frac(m*sqrt(2)) $ is close to the fractional part of the $ n $th term, we can translate the line above to $ (k,\\sqrt k) $ to $ (k+m-n, \\sqrt {k+m-n}) $ and this won't change the sequence associated with it.</p>\n\n<p>So the algorithm works by building bigger and bigger lines described above by appending them together, remembering the info described above about the sequences associated with the lines (length, slope, intercept).</p>\n"
},
{
"answer_id": 2096734,
"score": 1,
"text": "<p>As you said, if you ignored the floor and looked at the sum $\\sum_{i=1}^n \\sqrt{2} i = \\sqrt{2} \\sum_{i=1}^n i$ you could easily compute the answer as $\\sqrt{2} \\frac{n(n+1)}{2}$. Taking floors clearly lowers this, but this is an upper bound.</p>\n\n<p>On the other hand we know $\\lfloor \\sqrt{2} i \\rfloor > \\sqrt{2}i - 1$. So we could get a lower bound by taking $\\sum_{i=1}^n (\\sqrt{2} i - 1) = \\sqrt{2} \\frac{n(n+1)}{2} - n$.</p>\n\n<p>The point being that both of these bounds look like $\\frac{\\sqrt{2}}{2} n^2 + O(n)$. So for $n=10^{100}$ we can get about 100 digits of accuracy by the approximation $\\frac{\\sqrt{2}}{2} \\times 10^{200}$. </p>\n"
}
] | Question: How to evaluate this sum $\sum\limits_{i=1}^{n} \left \lfloor \sqrt{2} \cdot i \right \rfloor$
Answer 1: <p>I've found a fast enough algorithm to compute the exact value for $ n = 10^{100} $:</p>
<p>$$ \sum_{i=1}^{10^{100}} \lfloor i\sqrt 2 \rfloor = $$ </p>
<p>$$ 70710678118654752440084436210484903928483593768847403658833986899536623923105351942519376716382078638821760123411090095254685423841027253480565451739737157454059823250037671948325191776995310741236436 $$</p>
<p>Here's the python code:</p>
<pre><code>import math
import time

#
# Helpers for numbers in the form a + b*sqrt(2), encoded as a tuple (a,b)
#

def sign(x):
    # sign of x: -1, 0 or 1 (replaces Python 2's cmp(x, 0))
    return (x > 0) - (x < 0)

# compare 2 numbers
def root2_cmp(x, y):
    x_a, x_b = x
    y_a, y_b = y

    a = x_a - y_a
    b = x_b - y_b

    a_sign = sign(a)
    b_sign = sign(b)

    if a_sign == b_sign: return a_sign

    if a_sign == 0: return b_sign
    if b_sign == 0: return a_sign

    # one of a_sign and b_sign is 1 and the other is -1
    # compare a^2 and (b*sqrt(2))^2 = 2*b^2
    aa = a*a
    bb2 = b*b*2

    # if a^2 > 2b^2, return a_sign, otherwise the opposite sign
    return sign(aa - bb2)*a_sign

def root2_max(x, y):
    return x if root2_cmp(x, y) > 0 else y

def root2_min(x, y):
    return x if root2_cmp(x, y) < 0 else y

def root2_add(x, y):
    return tuple(x_i + y_i for x_i, y_i in zip(x, y))

def root2_sub(x, y):
    return tuple(x_i - y_i for x_i, y_i in zip(x, y))

# Takes 2 sequence objects in the form (lower_bound, upper_bound, offset, sequence_data)
# Returns the same data for the concatenation of the sequences.
def seq_concat(a, b):
    a_lower, a_upper, a_offset, a_seq_data = a
    b_lower, b_upper, b_offset, b_seq_data = b

    c_lower = root2_max(a_lower, root2_sub(b_lower, a_offset))
    c_upper = root2_min(a_upper, root2_sub(b_upper, a_offset))

    # sanity check - should never be concatenating sequences that can't go together
    assert root2_cmp(c_upper, c_lower) > 0

    c_offset = root2_add(a_offset, b_offset)

    c_seq_data = seq_data_concat(a_seq_data, b_seq_data)

    return c_lower, c_upper, c_offset, c_seq_data

# Takes 2 sequence data objects in the form (length, slope, y-intercept)
# Returns the same data for the concatenation of the sequences.
def seq_data_concat(a, b):
    a_len, a_slope, a_intercept = a
    b_len, b_slope, b_intercept = b

    c_len = a_len + b_len
    c_slope = a_slope + b_slope
    c_intercept = a_slope*b_len + a_intercept + b_intercept

    return (c_len, c_slope, c_intercept)

# Compute the sum for i from 1 to n of floor(i*sqrt(2))
def root2_floor_sum(n):
    seqs = [
        # lower     upper     offset    seq_data (length, slope, y-intercept)
        ( (0, 0),   (2, -1),  (-1, 1),  (1, 1, 1) ),
        ( (2, -1),  (1, 0),   (-2, 1),  (1, 2, 2) )
    ]

    prev_seq = None
    cur_seq_data = (0, 0, 0)
    cur_offset = (0, 0)

    # while current sequence length is below n
    while cur_seq_data[0] < n:
        # remove too-big sequences
        max_len = n - cur_seq_data[0]
        seqs = [s for s in seqs if s[3][0] <= max_len]

        # sequences whose validity window [lb, ub) contains the current offset
        matching_seqs = [s for s in seqs
                         if root2_cmp(cur_offset, s[0]) >= 0
                         and root2_cmp(cur_offset, s[1]) < 0]
        next_seq = max(matching_seqs, key=lambda s: s[3][0])

        if prev_seq is not None:
            seqs.append(seq_concat(prev_seq, next_seq))

        next_lb, next_ub, next_off, next_seq_data = next_seq

        cur_seq_data = seq_data_concat(cur_seq_data, next_seq_data)
        cur_offset = root2_add(cur_offset, next_off)

        prev_seq = next_seq

    return cur_seq_data[2]

def verify_up_to(n):
    # verify the algorithm works for small n
    expected = 0
    root2 = math.sqrt(2)
    for i in range(1, n + 1):
        expected += int(math.floor(root2*i))
        assert root2_floor_sum(i) == expected

verify_up_to(1000)

start_time = time.time()
result = root2_floor_sum(10**100)
end_time = time.time()

print(result)
print('computed in', int((end_time - start_time)*1000), 'ms')
</code></pre>
<p>Here's the output I get for the above program:</p>
<pre><code>70710678118654752440084436210484903928483593768847403658833986899536623923105351942519376716382078638821760123411090095254685423841027253480565451739737157454059823250037671948325191776995310741236436
computed in 889 ms
</code></pre>
<h2>Explanation of the algorithm</h2>
<p>First, consider the sequence of differences of successive terms in this sum $ a_n = \lfloor n\sqrt 2 \rfloor - \lfloor(n-1) \sqrt 2 \rfloor $.</p>
<p>This sequence is all $ 1 $s and $ 2 $s: $ 1,1,2,1,2,1,1,2,1, ... $</p>
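<p>A minimal sketch to reproduce these first terms directly (my own illustration; floating point is accurate enough at this tiny scale):</p>
<pre><code>import math

# a_n = floor(n*sqrt(2)) - floor((n-1)*sqrt(2)) for n = 1..9
print([math.floor(n*math.sqrt(2)) - math.floor((n - 1)*math.sqrt(2))
       for n in range(1, 10)])  # [1, 1, 2, 1, 2, 1, 1, 2, 1]
</code></pre>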
<p>Given this, we can compute the sum in question with $ \sum_{i=1}^n i \cdot a_{n-i+1} $. But this is obviously too slow with $ n = 10^{100} $. So rather than computing the sequence directly, we want to compute some info about it that still lets us compute the sum.</p>
<p>So we will only keep track of the following for the sequence $ a $:</p>
<ul>
<li><p>The length of the sequence $ len(a) $</p></li>
<li><p>The function $ f_a(k) = \sum_{i=1}^{len(a)} (i+k) \cdot a_{len(a)-i+1} $. Then $ f_a(0) $ is the sum described above. You can think of this as being the above sum, but with $ a $ having $ k $ zeros appended to it. Because $ f_a $ is a finite sum of linear functions, it is linear. Therefore it is uniquely defined by its slope, $ slope(a) $, and $y$-intercept, $ intercept(a) $.</p></li>
</ul>
<p>Now suppose we have the above info for sequences $ a $ and $ b $. We can compute the same info for the concatenation of these sequences, $ c $.</p>
<ul>
<li><p>$ len(c) = len(a) + len(b) $</p></li>
<li><p>$ f_c(k) = f_a(k + len(b)) + f_b(k) $, giving us</p>
<ul>
<li>$ slope(c) = slope(a) + slope(b) $</li>
<li>$ intercept(c) = slope(a) \cdot len(b) + intercept(a) + intercept(b) $</li>
</ul></li>
</ul>
<p>So this means that we can build the needed info for the whole sequence by combining info of smaller sequences.</p>
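<p>As a worked example of these rules (a sketch of my own, restating the concatenation rule from the listing above): the one-term sequences $[1]$ and $[2]$ have data $(1,1,1)$ and $(1,2,2)$, and their concatenation $[1,2]$ has $ f_c(k) = (1+k)\cdot 2 + (2+k)\cdot 1 = 3k+4 $, i.e. data $(2,3,4)$.</p>
<pre><code># same concatenation rule as seq_data_concat in the listing above
def seq_data_concat(a, b):
    a_len, a_slope, a_intercept = a
    b_len, b_slope, b_intercept = b
    return (a_len + b_len, a_slope + b_slope,
            a_slope*b_len + a_intercept + b_intercept)

# [1] -> (1, 1, 1), [2] -> (1, 2, 2); concatenation [1, 2] -> (2, 3, 4)
assert seq_data_concat((1, 1, 1), (1, 2, 2)) == (2, 3, 4)
</code></pre>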
<p>Next, we need the fact that this sequence repeats itself a lot (I mean <a href="https://oeis.org/A006337" rel="noreferrer">look at it</a>).</p>
<p>Imagine a line drawn from $ (n, n\sqrt 2) $ to $ (m, m\sqrt 2) $ with $ m > n $. Between each pair of consecutive vertical grid lines it crosses, this line passes through either $ 1 $ or $ 2 $ horizontal grid lines. These $ 1 $s and $2$s are exactly the sequence of differences of terms we defined above at indices $ n $ through $ m $. Because of the irrationality of $ \sqrt 2 $, we can move this line around slightly and still get the same sequence. If we've computed the sequence up to some $ k $ and find that the fractional part of the $ k $th term in the original sum, $ \{k\sqrt 2\} $, is close to the fractional part of the $ n $th term, $ \{n\sqrt 2\} $, we can translate the line above so that it runs from $ (k, k\sqrt 2) $ to $ (k+m-n, (k+m-n)\sqrt 2) $, and this won't change the sequence associated with it.</p>
<p>So the algorithm builds bigger and bigger such lines by appending them together, remembering for each one the info about its associated sequence (length, slope, intercept).</p>
Answer 2: <p>As you said, if you ignored the floor and looked at the sum $\sum_{i=1}^n \sqrt{2} i = \sqrt{2} \sum_{i=1}^n i$ you could easily compute the answer as $\sqrt{2} \frac{n(n+1)}{2}$. Taking floors can only lower this, so it is an upper bound.</p>
<p>On the other hand we know $\lfloor \sqrt{2} i \rfloor > \sqrt{2}i - 1$. So we could get a lower bound by taking $\sum_{i=1}^n (\sqrt{2} i - 1) = \sqrt{2} \frac{n(n+1)}{2} - n$.</p>
<p>The point is that both of these bounds look like $\frac{\sqrt{2}}{2} n^2 + O(n)$. So for $n=10^{100}$ we can get about 100 digits of accuracy from the approximation $\frac{\sqrt{2}}{2} \times 10^{200}$. </p>
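<p>A minimal sketch of this estimate (my own illustration; it assumes Python 3.8+ for <code>math.isqrt</code>) computes both bounds exactly with integer arithmetic and counts how many leading digits they share:</p>
<pre><code>from math import isqrt

n = 10**100
t = n*(n + 1)//2         # 1 + 2 + ... + n
upper = isqrt(2*t*t)     # floor(sqrt(2)*n(n+1)/2), the upper bound
lower = upper - n        # a lower bound, from floor(x) > x - 1

# the exact sum lies between lower and upper, so any leading digits
# they share are guaranteed correct
u, l = str(upper), str(lower)
shared = next(i for i, (du, dl) in enumerate(zip(u, l)) if du != dl)
print(shared)  # about 100
</code></pre>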
|
$f \in C^2(\mathbb{R}^2)$ is harmonic; $f$ is in one $L^p(\mathbb{R}^2)$ $\implies$ $f$ is constant | 2,096,868 | 5 | https://math.stackexchange.com/questions/2096868/f-in-c2-mathbbr2-is-harmonic-f-is-in-one-lp-mathbbr2-impli | <p>Suppose $f \in C^2(\mathbb{R}^2)$ is harmonic. Suppose that $f$ is in one $L^p(\mathbb{R}^2)$ space, with $1 \le p \le \infty$. </p>
<p>How can I prove that $f$ is constant and in particular is identically $0$ if $1 \le p < \infty$ ?</p>
| [
{
"answer_id": 2102266,
"score": 2,
"text": "<p>I'm going to add to the proof by @Glitch because I think it can be simplified. Starting with the volume version of the mean value property (which follows from the spherical MVP), we have, for any $r>0,$</p>\n\n<p>$$u(x) = \\frac{1}{V(B(x,r))}\\int_{B(x,r)} u\\,dV,$$</p>\n\n<p>where $V$ is Lebesgue volume measure on $\\mathbb R^n.$ This implies</p>\n\n<p>$$ |u(x)| \\le \\frac{1}{V(B(x,r))}\\int_{B(x,r)} |u|\\, dV.$$</p>\n\n<p>For $1\\le p <\\infty,$ apply Jensen's inequality to see</p>\n\n<p>$$\\tag 1 |u(x)|^p \\le \\frac{1}{V(B(x,r))}\\int_{B(x,r)} |u|^p\\, dV.$$</p>\n\n<p>Recall this holds for any $r>0.$ As $r\\to \\infty,$ $V(B(x,r)) \\to \\infty,$ while the integral in $(1)$ is bounded above by $ \\int_{\\mathbb R ^n} |u|^p\\, dV < \\infty.$ It follows that $|u(x)|^p = 0,$ i.e., $u(x) = 0.$ Since $x$ was an arbitrary point in $\\mathbb R^n,$ we see $u\\equiv 0$ as desired.</p>\n"
},
{
"answer_id": 2096968,
"score": 1,
"text": "<p>I'll present the proof with $1 \\le p < \\infty$ since the case $p=\\infty$ can be handled with Liouville's theorem. I'll also work more generally in $\\mathbb{R}^n$ for $n\\ge 2$ rather than $n=2$, as you ask about.</p>\n\n<p>The key is to use the mean-value property. It shows that for $x \\in \\mathbb{R}^n$ and $r>0$ \n$$\nu(x) = \\frac{1}{\\omega_n r^{n}} \\int_{ B(x,r)} u(y) dy \n$$\nwhere $\\omega_n = |B(0,1)|$. Then we use Holder's inequality:<br>\n$$\n|u(x)| \\le \\frac{1}{\\omega_n r^n} \\int_{B(x,r)} |u(y)| dy \\le \\frac{1}{\\omega_n r^n} (\\omega_n r^n)^{1/p'} \\Vert u \\Vert_{L^p(B(x,r))} \\\\\n= \\frac{1}{\\omega_n^{1/p} r^{n/p}} \\Vert u \\Vert_{L^p(B(x,r))} \n$$\nwhere $1/p + 1/p' =1$. Since $u$ is such that \n$$\n\\Vert u \\Vert_{L^p(\\mathbb{R}^n)} < \\infty,\n$$\nwe plug in to see that\n$$\n|u(x)| \\le \\frac{1}{\\omega_n^{1/p} r^{n/p}} \\Vert u \\Vert_{L^p(\\mathbb{R}^n)}.\n$$\nSending $r \\to \\infty$ then shows that \n$$\n|u(x)| \\le \\lim_{r \\to \\infty} \\frac{1}{\\omega_n^{1/p} r^{n/p}} \\Vert u \\Vert_{L^p(\\mathbb{R}^n)}=0,\n$$\nand so $u(x) =0$ for all $x$. Thus the only harmonic function in $L^p(\\mathbb{R}^n)$ for $1 \\le p < \\infty$ is identically $0$.</p>\n"
},
{
"answer_id": 2102258,
"score": 1,
"text": "<p>Since $f\\in L^p$, we have that $f$ is a tempered distribution. Notice that $\\Delta f =0$. Then $|\\xi|^2\\widehat f =0$, which implies that $\\widehat f$ has support contained in {0}. Any distribution of support in $\\{0\\}$ is of the form\n$$\\widehat f = \\sum_{|\\alpha|\\leq d}c_\\alpha \\partial^{\\alpha}\\delta$$\nwhere $d$ is a non negative integer and $c_\\alpha \\in \\mathbb C$.</p>\n\n<p>Taking the inverse Fourier transform we conclude that $f$ is an polynomial function.</p>\n\n<p>But the only polynomial function in $L^p$ is $f=0$. </p>\n"
}
] | Question: $f \in C^2(\mathbb{R}^2)$ is harmonic; $f$ is in one $L^p(\mathbb{R}^2)$ $\implies$ $f$ is constant
Answer 1: <p>I'm going to add to the proof by @Glitch because I think it can be simplified. Starting with the volume version of the mean value property (which follows from the spherical MVP), we have, for any $r>0,$</p>
<p>$$u(x) = \frac{1}{V(B(x,r))}\int_{B(x,r)} u\,dV,$$</p>
<p>where $V$ is Lebesgue volume measure on $\mathbb R^n.$ This implies</p>
<p>$$ |u(x)| \le \frac{1}{V(B(x,r))}\int_{B(x,r)} |u|\, dV.$$</p>
<p>For $1\le p <\infty,$ apply Jensen's inequality to see</p>
<p>$$\tag 1 |u(x)|^p \le \frac{1}{V(B(x,r))}\int_{B(x,r)} |u|^p\, dV.$$</p>
<p>Recall this holds for any $r>0.$ As $r\to \infty,$ $V(B(x,r)) \to \infty,$ while the integral in $(1)$ is bounded above by $ \int_{\mathbb R ^n} |u|^p\, dV < \infty.$ It follows that $|u(x)|^p = 0,$ i.e., $u(x) = 0.$ Since $x$ was an arbitrary point in $\mathbb R^n,$ we see $u\equiv 0$ as desired.</p>
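<p>A small Python sketch (illustrative, not part of the proof) that checks the volume mean value property numerically for the harmonic function $u(x,y)=x^2-y^2$, by Monte Carlo averaging over a disk:</p>

<pre><code>import random

def u(x, y):
    return x * x - y * y    # harmonic: u_xx + u_yy = 2 - 2 = 0

def disk_average(x0, y0, r, samples=100000):
    # Monte Carlo estimate of the average of u over the disk B((x0, y0), r)
    random.seed(0)
    total = 0.0
    for _ in range(samples):
        while True:  # rejection-sample a uniform point in the disk
            dx = random.uniform(-r, r)
            dy = random.uniform(-r, r)
            if dx * dx + dy * dy <= r * r:
                break
        total += u(x0 + dx, y0 + dy)
    return total / samples

print(u(1.5, 0.5))                  # 2.0 exactly
print(disk_average(1.5, 0.5, 2.0))  # approximately 2.0
</code></pre>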
Answer 2: <p>I'll present the proof with $1 \le p < \infty$ since the case $p=\infty$ can be handled with Liouville's theorem. I'll also work more generally in $\mathbb{R}^n$ for $n\ge 2$ rather than $n=2$, as you ask about.</p>
<p>The key is to use the mean-value property. It shows that for $x \in \mathbb{R}^n$ and $r>0$
$$
u(x) = \frac{1}{\omega_n r^{n}} \int_{ B(x,r)} u(y) dy
$$
where $\omega_n = |B(0,1)|$. Then we use Hölder's inequality:<br>
$$
|u(x)| \le \frac{1}{\omega_n r^n} \int_{B(x,r)} |u(y)| dy \le \frac{1}{\omega_n r^n} (\omega_n r^n)^{1/p'} \Vert u \Vert_{L^p(B(x,r))} \\
= \frac{1}{\omega_n^{1/p} r^{n/p}} \Vert u \Vert_{L^p(B(x,r))}
$$
where $1/p + 1/p' =1$. Since $u$ is such that
$$
\Vert u \Vert_{L^p(\mathbb{R}^n)} < \infty,
$$
we plug in to see that
$$
|u(x)| \le \frac{1}{\omega_n^{1/p} r^{n/p}} \Vert u \Vert_{L^p(\mathbb{R}^n)}.
$$
Sending $r \to \infty$ then shows that
$$
|u(x)| \le \lim_{r \to \infty} \frac{1}{\omega_n^{1/p} r^{n/p}} \Vert u \Vert_{L^p(\mathbb{R}^n)}=0,
$$
and so $u(x) =0$ for all $x$. Thus the only harmonic function in $L^p(\mathbb{R}^n)$ for $1 \le p < \infty$ is identically $0$.</p>
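<p>A quick discrete sanity check of the Hölder step above (a sketch: the second factor is taken to be the constant function $1$ on a normalized counting measure, so $\Vert 1 \Vert_{p'} = 1$ and the inequality reduces to $|\text{avg } f| \le \Vert f \Vert_p$):</p>

<pre><code>import random

random.seed(1)
p = 3.0
f = [random.uniform(-1, 1) for _ in range(1000)]

lhs = abs(sum(f)) / len(f)                                   # |average of f|
norm_p = (sum(abs(x) ** p for x in f) / len(f)) ** (1 / p)   # L^p norm
assert lhs <= norm_p + 1e-12   # Hoelder with g = 1: |avg f| <= ||f||_p
print(lhs, norm_p)
</code></pre>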
Answer 3: <p>Since $f\in L^p$, we have that $f$ is a tempered distribution. Notice that $\Delta f =0$. Then $|\xi|^2\widehat f =0$, which implies that $\widehat f$ has support contained in $\{0\}$. Any distribution with support in $\{0\}$ is of the form
$$\widehat f = \sum_{|\alpha|\leq d}c_\alpha \partial^{\alpha}\delta$$
where $d$ is a nonnegative integer and $c_\alpha \in \mathbb C$.</p>
<p>Taking the inverse Fourier transform, we conclude that $f$ is a polynomial function.</p>
<p>But the only polynomial function in $L^p$ is $f=0$. </p>
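<p>To see why the inverse transform of such a sum is a polynomial, here is a short sketch, using the convention $\widehat \varphi(\xi)=\int_{\mathbb R^n}\varphi(x)e^{-2\pi i x\cdot\xi}\,dx$ and $\varphi^{\vee}(x)=\int_{\mathbb R^n}\varphi(\xi)e^{2\pi i x\cdot\xi}\,d\xi$ (other normalizations only change constants). Pairing against a test function $\varphi$,
$$\langle(\partial^{\alpha}\delta)^{\vee},\varphi\rangle
=(-1)^{|\alpha|}\,(\partial^{\alpha}\varphi^{\vee})(0)
=(-1)^{|\alpha|}\int_{\mathbb R^n}(2\pi i\,\xi)^{\alpha}\varphi(\xi)\,d\xi
=\int_{\mathbb R^n}(-2\pi i\,\xi)^{\alpha}\varphi(\xi)\,d\xi,$$
so $(\partial^{\alpha}\delta)^{\vee}$ is the polynomial $x\mapsto(-2\pi i\,x)^{\alpha}$, and finite linear combinations of such terms are polynomials.</p>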
|
$\gcd(a, b) = \gcd(a, b-a^n)$ where $n\ge 1$. Is this statement true? | 2097134 | 5 | https://math.stackexchange.com/questions/2097134/gcda-b-gcda-b-an-where-n-ge-1-is-this-statement-true | <p>From Euclid I know $\gcd(a,b) = \gcd(a, b-ka)$.</p>
<p>I'm wondering if it is legal to replace $k$ by $a^{n-1}$?</p>
<p>I'm a bit skeptical here because $a$ is a variable... it's not like $k$, I guess.</p>
<p>Appreciate any help. Thank you!</p>
| [
{
"answer_id": 2097144,
"score": 6,
"text": "<p>Let $\\gcd (a,b)= m$\nand $\\gcd (a,b-a^n) = d$. \nSo,\n$$ d\\big| a \\implies d\\big| a^n \\tag 1 $$\nAlso,\n$$d\\big|(b- a^n)\\tag 2 $$\nFrom $(1)$ and $(2)$, we get $d\\big| b$. Thus , $d\\big| a$ and $d\\big| b$. This implies that $$d| gcd (a,b) =m \\tag 3$$\nWe now show,$m\\big| d$. As $m\\big| a$ and $m\\big| (b-a^n)$ and ,$gcd (a,b-a^n) = d$, thus, $$m\\big|d \\tag 4$$\nFrom $(3)$ and $(4)$, we get $m=d$.</p>\n"
},
{
"answer_id": 2097139,
"score": 1,
"text": "<p>If $k$ can be any integer, then it is valid, seeing $a^{n-1}$ is also an integer.</p>\n"
}
] | Question: $\gcd(a, b) = \gcd(a, b-a^n)$ where $n\ge 1$ . Is this statement true?
Answer 1: <p>Let $\gcd(a,b)= m$ and $\gcd(a,b-a^n) = d$.
So
$$ d\big| a \implies d\big| a^n. \tag 1 $$
Also,
$$d\big|(b- a^n).\tag 2 $$
From $(1)$ and $(2)$, we get $d\big| b$. Thus $d\big| a$ and $d\big| b$, which implies that $$d\big| \gcd(a,b) = m. \tag 3$$
We now show $m\big| d$. Since $m\big| a$ (hence $m\big| a^n$) and $m\big| b$, we get $m\big|(b-a^n)$; every common divisor of $a$ and $b-a^n$ divides $\gcd(a,b-a^n) = d$, so $$m\big| d. \tag 4$$
From $(3)$ and $(4)$, we get $m=d$.</p>
Answer 2: <p>If $k$ can be any integer, then it is valid, since $a^{n-1}$ is also an integer.</p>
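<p>A brute-force Python sketch (illustrative only) checking the identity $\gcd(a, b) = \gcd(a, b-a^n)$ on random inputs:</p>

<pre><code>import random
from math import gcd

random.seed(0)
for _ in range(10000):
    a = random.randint(1, 500)
    b = random.randint(1, 500)
    n = random.randint(1, 5)
    # math.gcd returns a nonnegative result even when b - a**n < 0
    assert gcd(a, b) == gcd(a, b - a ** n)
print("all checks passed")
</code></pre>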
|
Why does the continued fraction of Champernowne's number have such large entries? | 2097748 | 5 | https://math.stackexchange.com/questions/2097748/why-does-the-continued-fraction-of-champerownes-number-have-so-large-entries | <p>If we calculate the continued fraction of Champernowne's number $$0.1234567891011121314151617181920\cdots $$ it turns out that it contains very large entries.</p>
<blockquote>
<p>How can I understand this intuitively?</p>
</blockquote>
<p>The decimal representation of Champernowne's number must have parts which make the number look like a rational because of apparently periodic stretches, but it is not obvious where these (large) periods should be.</p>
<p>Long runs of zeros (making the number look as if it had a terminating decimal expansion) occur only much later, so they do not explain the large entries.</p>
<p>Any ideas?</p>
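<p>A short Python sketch that exhibits the phenomenon (illustrative: the rational truncation below reproduces only the first several terms of the true continued fraction, which is enough to see a huge entry appear):</p>

<pre><code>from fractions import Fraction

def champernowne(n_ints):
    # the rational 0.123456789101112... built from the integers 1..n_ints
    digits = "".join(str(k) for k in range(1, n_ints + 1))
    return Fraction(int(digits), 10 ** len(digits))

def cf_terms(x, max_terms=12):
    # simple continued fraction expansion of a nonnegative rational x
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return terms

print(cf_terms(champernowne(100)))
# expected start: [0, 8, 9, 1, 149083, ...] -- a huge term appears at once
</code></pre>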
| [
{
"answer_id": 2097764,
"score": 9,
"text": "<p>As a first-order approximation, it is very close to</p>\n\n<p>$ 10/81 = 0.1 + 0.02 + 0.003 + \\dots = 0.123456790123\\dots $</p>\n\n<p>As a second,</p>\n\n<p>$ 1/9801 = 0.0001 + 0.000002 + \\dots = 0.0001020304050607080910111213\\dots $</p>\n\n<p>so it should not come as a surprise that Champernowne's [sic] number is close to the sum of $10/81$ and another rational which is a multiple of $1/(9801\\cdot10^8)$, the factor $10^8$ being needed to shift the $10111213\\dots$ part 8 decimal places to the right, to leave the initial $0.12345678$ alone (the $9$ is already in the correct place w.r.t. the following $10$). In fact</p>\n\n<p>$ 10/81-1002/980100000000 \\approx 0.12345678910111213\\dots96979900010203040506070809\\dots$</p>\n"
},
{
"answer_id": 4530009,
"score": 1,
"text": "<p><strong>Warning:</strong> I am leaving the answer below while I try to repair it, but I now believe it to be seriously incomplete, if not misleading. It is certainly true that if the continued fraction of a constant contains extremely large terms, then the corresponding convergents will be exceptionally good rational approximations of the constant. But a constant having exceptionally good rational approximations does not imply that there will be large terms in its continued fraction, so merely noting the existence of these exceptionally good rational approximations is not a sufficient explanation for why the continued fraction has such large terms. A useful counterexample to keep in mind is the constant\n<span class=\"math-container\">$$\n\\sum_{n=0}^\\infty\\frac{1}{10^{2^n}}.\n$$</span>\nThe inclusion of each successive term in this series doubles the number of correct digits in the decimal expansion, so just as I say in my answer with regard to the Champernowne constant, "Each additional term in this series provides an exponentially growing number of correct digits...." Nevertheless, the continued fraction contains no large terms. In fact, the terms have a simple pattern and, more importantly, are bounded, as was shown in</p>\n<p>Jeffrey Shallit, <a href=\"https://doi.org/10.1016/0022-314X(79)90040-4\" rel=\"nofollow noreferrer\">Simple continued fractions for some irrational numbers</a>, <em>Journal of Number Theory</em> <strong>11</strong> (1979) 209-217.</p>\n<p>Interestingly the Liouville constant and the constant\n<span class=\"math-container\">$$\n\\sum_{n=0}^\\infty\\frac{1}{10^{3^n}},\n$$</span>\nlike the Champernowne constant, exhibit the large term phenomenon in their continued fractions. All three continued fractions contain a sequence of terms that grows super-exponentially. The continued fraction of the constant\n<span class=\"math-container\">$$\n\\sum_{n=0}^\\infty\\frac{1}{10^{2^n-n}}\n$$</span>\nhas large terms as well, but they grow only exponentially, not super-exponentially. That the terms in the continued fraction of\n<span class=\"math-container\">$$\n\\sum_{n=0}^\\infty\\frac{1}{10^{2^n}}\n$$</span>\ndo not grow at all indicates that the issues are rather subtle and that careful analysis is needed.</p>\n<p><strong>The original answer:</strong> To expand a bit on Rosie F's answer, we can write\n<span class=\"math-container\">\\begin{align}\nC_{10} =& \\frac{10}{81} - \\left(\\frac{91}{81}-\\frac{991}{9801}\\right)\\times10^{-9}-\\left(\\frac{9901}{9801}-\\frac{99901}{998001}\\right)\\times10^{-189}\\\\\n&-\\left(\\frac{999001}{998001}-\\frac{9999001}{99980001}\\right)\\times10^{-2889}-\\left(\\frac{99990001}{99980001}-\\frac{999990001}{9999800001}\\right)\\times10^{-38889}+\\ldots.\n\\end{align}</span>\nOne can see that the first term of this series approximates <span class=\"math-container\">$C_{10}$</span> with an error of about <span class=\"math-container\">$10^{-9}$</span>, the sum of the first two terms approximates <span class=\"math-container\">$C_{10}$</span> with an error of about <span class=\"math-container\">$0.9\\times10^{-189}$</span>, the sum of the first three terms approximates <span class=\"math-container\">$C_{10}$</span> with an error of about <span class=\"math-container\">$0.9\\times10^{-2889}$</span>, and so on. 
Each additional term in this series provides an exponentially growing number of correct digits of <span class=\"math-container\">$C_{10}$</span>.</p>\n<p>To see where the series comes from, write the base-<span class=\"math-container\">$b$</span> Champernowne constant as a double sum,\n<span class=\"math-container\">$$\nC_b=\\sum_{n=1}^\\infty b^{-E_b(n-1)}\\sum_{k=b^{n-1}}^{b^n-1}\\frac{k}{b^{n(k-b^{n-1}+1)}},\n$$</span>\nwhere <span class=\"math-container\">$E_b(n)=\\sum_{k=1}^nk(b-1)b^{k-1}$</span>. Notice that this is just a direct translation of the definition of the constant: the inner sum runs over all the <span class=\"math-container\">$n$</span>-digit base-<span class=\"math-container\">$b$</span> numbers, giving, for example,\n<span class=\"math-container\">$$0.1+0.02+0.003+\\ldots+0.000000009=0.123456789$$</span>\nwhen <span class=\"math-container\">$b=10$</span> and <span class=\"math-container\">$n=1$</span> and\n<span class=\"math-container\">$$0.10+0.0011+0.000012+\\ldots+0.\\underbrace{00\\ldots0}_{\\text{178 zeroes}}99=0.101112\\ldots9899$$</span> when <span class=\"math-container\">$b=10$</span> and <span class=\"math-container\">$n=2$</span>; the pre-factor of this sum just shifts the sum to the appropriate starting position <span class=\"math-container\">$E_b(n-1)$</span> places to the right of the radix point. (This is the number of places occupied by base-<span class=\"math-container\">$b$</span> numbers of fewer than <span class=\"math-container\">$n$</span> digits.)</p>\n<p>Both the expression for <span class=\"math-container\">$E_b(n)$</span> and the inner sum of the expression for <span class=\"math-container\">$C_b$</span> are of the form <span class=\"math-container\">$C\\sum_{k=p}^q ka^k$</span>, which can be evaluated using\n<span class=\"math-container\">$$\n\\sum_{k=p}^q ka^k=a^p\\frac{p-(p-1)a}{(1-a)^2}-a^{q+1}\\frac{q+1-qa}{(1-a)^2}.\n$$</span>\nThis can be used to write\n<span class=\"math-container\">$$\nE_b(n) = nb^n-\\frac{b^n-1}{b-1}\n$$</span>\nand\n<span class=\"math-container\">$$\nb^{-E_b(n-1)} \\sum_{k=b^{n-1}}^{b^n-1}\\frac{k}{b^{n(k-b^{n-1}+1)}} = \\frac{b^{2n-1}-b^{n-1}+1}{\\left(b^n-1\\right)^2}b^{-E_b(n-1)}-\\frac{b^{2n}-b^n+1}{\\left(b^n-1\\right)^2}b^{-E_b(n)},\n$$</span>\nfrom which we get\n<span class=\"math-container\">$$\nC_b = \\frac{b}{(b-1)^2}-\\sum_{n=1}^\\infty\\left(\\frac{b^{2n}-b^n+1}{\\left(b^n-1\\right)^2} - \\frac{b^{2n+1}-b^n+1}{\\left(b^{n+1}-1\\right)^2}\\right)b^{-E_b(n)}.\n$$</span>\nThis produces the series for <span class=\"math-container\">$C_{10}$</span> at the beginning of this post.</p>\n<p>The rational numbers one gets by truncating this series at <span class=\"math-container\">$n=N$</span> for some finite <span class=\"math-container\">$N$</span> generally agree with the convergents one gets by truncating the continued fraction of <span class=\"math-container\">$C_b$</span> just before one of the high water marks. (A <em>high water mark</em> is a coefficient in the continued fraction larger than all previous coefficients.) 
There are some exceptions when <span class=\"math-container\">$b$</span> and <span class=\"math-container\">$N$</span> are both very small.</p>\n<p><strong>Remark on the intuition:</strong>\nI wanted to address this part of your post:</p>\n<blockquote>\n<blockquote>\n<p>How can I understand this intuitively ?</p>\n</blockquote>\n<p>The decimal representation of Champerowne's number must have parts which make the number look like a rational because of appearing periods, but it is not obvious where these (large) periods should be.</p>\n</blockquote>\n<p>Intuitively you can say that a real number looks rational if its decimal expansion agrees for many places with that of a rational number whose numerator and denominator have far fewer digits than has the string of places on which the agreement occurs. For example, which of the following two numbers look rational?\n<span class=\"math-container\">\\begin{align}\n&0.01886792452830\\ldots\\\\\n&0.01818181818181\\ldots\n\\end{align}</span>\nYou can say that both look rational since they both agree to a lot of places with a rational number that has two-digit denominator, the first with <span class=\"math-container\">$\\frac{1}{53}$</span>, the second with <span class=\"math-container\">$\\frac{1}{55}$</span>. That the second appears to have a period of length <span class=\"math-container\">$2$</span> does not make it closer to rational than the first. The Champernowne constant is similar to the first of these examples in that there's not going to be a short periodic pattern close to the decimal point to clue you in to the fact that the constant is very close to a rational number.</p>\n"
},
{
"answer_id": 2097764,
"score": 9,
"text": "<p>As a first-order approximation, it is very close to</p>\n\n<p>$ 10/81 = 0.1 + 0.02 + 0.003 + \\dots = 0.123456790123\\dots $</p>\n\n<p>As a second,</p>\n\n<p>$ 1/9801 = 0.0001 + 0.000002 + \\dots = 0.0001020304050607080910111213\\dots $</p>\n\n<p>so it should not come as a surprise that Champernowne's [sic] number is close to the sum of $10/81$ and another rational which is a multiple of $1/(9801\\cdot10^8)$, the factor $10^8$ being needed to shift the $10111213\\dots$ part 8 decimal places to the right, to leave the initial $0.12345678$ alone (the $9$ is already in the correct place w.r.t. the following $10$). In fact</p>\n\n<p>$ 10/81-1002/980100000000 \\approx 0.12345678910111213\\dots96979900010203040506070809\\dots$</p>\n"
},
{
"answer_id": 4530009,
"score": 1,
"text": "<p><strong>Warning:</strong> I am leaving the answer below while I try to repair it, but I now believe it to be seriously incomplete, if not misleading. It is certainly true that if the continued fraction of a constant contains extremely large terms, then the corresponding convergents will be exceptionally good rational approximations of the constant. But a constant having exceptionally good rational approximations does not imply that there will be large terms in its continued fraction, so merely noting the existence of these exceptionally good rational approximations is not a sufficient explanation for why the continued fraction has such large terms. A useful counterexample to keep in mind is the constant\n<span class=\"math-container\">$$\n\\sum_{n=0}^\\infty\\frac{1}{10^{2^n}}.\n$$</span>\nThe inclusion of each successive term in this series doubles the number of correct digits in the decimal expansion, so just as I say in my answer with regard to the Champernowne constant, "Each additional term in this series provides an exponentially growing number of correct digits...." Nevertheless, the continued fraction contains no large terms. In fact, the terms have a simple pattern and, more importantly, are bounded, as was shown in</p>\n<p>Jeffrey Shallit, <a href=\"https://doi.org/10.1016/0022-314X(79)90040-4\" rel=\"nofollow noreferrer\">Simple continued fractions for some irrational numbers</a>, <em>Journal of Number Theory</em> <strong>11</strong> (1979) 209-217.</p>\n<p>Interestingly the Liouville constant and the constant\n<span class=\"math-container\">$$\n\\sum_{n=0}^\\infty\\frac{1}{10^{3^n}},\n$$</span>\nlike the Champernowne constant, exhibit the large term phenomenon in their continued fractions. All three continued fractions contain a sequence of terms that grows super-exponentially. The continued fraction of the constant\n<span class=\"math-container\">$$\n\\sum_{n=0}^\\infty\\frac{1}{10^{2^n-n}}\n$$</span>\nhas large terms as well, but they grow only exponentially, not super-exponentially. That the terms in the continued fraction of\n<span class=\"math-container\">$$\n\\sum_{n=0}^\\infty\\frac{1}{10^{2^n}}\n$$</span>\ndo not grow at all indicates that the issues are rather subtle and that careful analysis is needed.</p>\n<p><strong>The original answer:</strong> To expand a bit on Rosie F's answer, we can write\n<span class=\"math-container\">\\begin{align}\nC_{10} =& \\frac{10}{81} - \\left(\\frac{91}{81}-\\frac{991}{9801}\\right)\\times10^{-9}-\\left(\\frac{9901}{9801}-\\frac{99901}{998001}\\right)\\times10^{-189}\\\\\n&-\\left(\\frac{999001}{998001}-\\frac{9999001}{99980001}\\right)\\times10^{-2889}-\\left(\\frac{99990001}{99980001}-\\frac{999990001}{9999800001}\\right)\\times10^{-38889}+\\ldots.\n\\end{align}</span>\nOne can see that the first term of this series approximates <span class=\"math-container\">$C_{10}$</span> with an error of about <span class=\"math-container\">$10^{-9}$</span>, the sum of the first two terms approximates <span class=\"math-container\">$C_{10}$</span> with an error of about <span class=\"math-container\">$0.9\\times10^{-189}$</span>, the sum of the first three terms approximates <span class=\"math-container\">$C_{10}$</span> with an error of about <span class=\"math-container\">$0.9\\times10^{-2889}$</span>, and so on. 
Each additional term in this series provides an exponentially growing number of correct digits of <span class=\"math-container\">$C_{10}$</span>.</p>\n<p>To see where the series comes from, write the base-<span class=\"math-container\">$b$</span> Champernowne constant as a double sum,\n<span class=\"math-container\">$$\nC_b=\\sum_{n=1}^\\infty b^{-E_b(n-1)}\\sum_{k=b^{n-1}}^{b^n-1}\\frac{k}{b^{n(k-b^{n-1}+1)}},\n$$</span>\nwhere <span class=\"math-container\">$E_b(n)=\\sum_{k=1}^nk(b-1)b^{k-1}$</span>. Notice that this is just a direct translation of the definition of the constant: the inner sum runs over all the <span class=\"math-container\">$n$</span>-digit base-<span class=\"math-container\">$b$</span> numbers, giving, for example,\n<span class=\"math-container\">$$0.1+0.02+0.003+\\ldots+0.000000009=0.123456789$$</span>\nwhen <span class=\"math-container\">$b=10$</span> and <span class=\"math-container\">$n=1$</span> and\n<span class=\"math-container\">$$0.10+0.0011+0.000012+\\ldots+0.\\underbrace{00\\ldots0}_{\\text{178 zeroes}}99=0.101112\\ldots9899$$</span> when <span class=\"math-container\">$b=10$</span> and <span class=\"math-container\">$n=2$</span>; the pre-factor of this sum just shifts the sum to the appropriate starting position <span class=\"math-container\">$E_b(n-1)$</span> places to the right of the radix point. (This is the number of places occupied by base-<span class=\"math-container\">$b$</span> numbers of fewer than <span class=\"math-container\">$n$</span> digits.)</p>\n<p>Both the expression for <span class=\"math-container\">$E_b(n)$</span> and the inner sum of the expression for <span class=\"math-container\">$C_b$</span> are of the form <span class=\"math-container\">$C\\sum_{k=p}^q ka^k$</span>, which can be evaluated using\n<span class=\"math-container\">$$\n\\sum_{k=p}^q ka^k=a^p\\frac{p-(p-1)a}{(1-a)^2}-a^{q+1}\\frac{q+1-qa}{(1-a)^2}.\n$$</span>\nThis can be used to write\n<span class=\"math-container\">$$\nE_b(n) = nb^n-\\frac{b^n-1}{b-1}\n$$</span>\nand\n<span class=\"math-container\">$$\nb^{-E_b(n-1)} \\sum_{k=b^{n-1}}^{b^n-1}\\frac{k}{b^{n(k-b^{n-1}+1)}} = \\frac{b^{2n-1}-b^{n-1}+1}{\\left(b^n-1\\right)^2}b^{-E_b(n-1)}-\\frac{b^{2n}-b^n+1}{\\left(b^n-1\\right)^2}b^{-E_b(n)},\n$$</span>\nfrom which we get\n<span class=\"math-container\">$$\nC_b = \\frac{b}{(b-1)^2}-\\sum_{n=1}^\\infty\\left(\\frac{b^{2n}-b^n+1}{\\left(b^n-1\\right)^2} - \\frac{b^{2n+1}-b^n+1}{\\left(b^{n+1}-1\\right)^2}\\right)b^{-E_b(n)}.\n$$</span>\nThis produces the series for <span class=\"math-container\">$C_{10}$</span> at the beginning of this post.</p>\n<p>The rational numbers one gets by truncating this series at <span class=\"math-container\">$n=N$</span> for some finite <span class=\"math-container\">$N$</span> generally agree with the convergents one gets by truncating the continued fraction of <span class=\"math-container\">$C_b$</span> just before one of the high water marks. (A <em>high water mark</em> is a coefficient in the continued fraction larger than all previous coefficients.) 
There are some exceptions when <span class=\"math-container\">$b$</span> and <span class=\"math-container\">$N$</span> are both very small.</p>\n<p><strong>Remark on the intuition:</strong>\nI wanted to address this part of your post:</p>\n<blockquote>\n<blockquote>\n<p>How can I understand this intuitively ?</p>\n</blockquote>\n<p>The decimal representation of Champerowne's number must have parts which make the number look like a rational because of appearing periods, but it is not obvious where these (large) periods should be.</p>\n</blockquote>\n<p>Intuitively you can say that a real number looks rational if its decimal expansion agrees for many places with that of a rational number whose numerator and denominator have far fewer digits than has the string of places on which the agreement occurs. For example, which of the following two numbers look rational?\n<span class=\"math-container\">\\begin{align}\n&0.01886792452830\\ldots\\\\\n&0.01818181818181\\ldots\n\\end{align}</span>\nYou can say that both look rational since they both agree to a lot of places with a rational number that has a two-digit denominator, the first with <span class=\"math-container\">$\\frac{1}{53}$</span>, the second with <span class=\"math-container\">$\\frac{1}{55}$</span>. That the second appears to have a period of length <span class=\"math-container\">$2$</span> does not make it closer to rational than the first. The Champernowne constant is similar to the first of these examples in that there's not going to be a short periodic pattern close to the decimal point to clue you in to the fact that the constant is very close to a rational number.</p>\n"
}
] | Question: Why does the continued fraction of Champerowne's number have such large entries?
Answer 1: <p>As a first-order approximation, it is very close to</p>
<p>$ 10/81 = 0.1 + 0.02 + 0.003 + \dots = 0.123456790123\dots $</p>
<p>As a second,</p>
<p>$ 1/9801 = 0.0001 + 0.000002 + \dots = 0.0001020304050607080910111213\dots $</p>
<p>so it should not come as a surprise that Champernowne's [sic] number is close to the sum of $10/81$ and another rational which is a multiple of $1/(9801\cdot10^8)$, the factor $10^8$ being needed to shift the $10111213\dots$ part 8 decimal places to the right, to leave the initial $0.12345678$ alone (the $9$ is already in the correct place w.r.t. the following $10$). In fact</p>
<p>$ 10/81-1002/980100000000 \approx 0.12345678910111213\dots96979900010203040506070809\dots$</p>
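<p>(A quick numerical check of the approximation above; this sketch is not part of the original answer, and the helper names are mine. It compares the exact decimal digits of $10/81-1002/980100000000$ with the concatenated digits $123456789101112\dots$; the two should agree for well over a hundred places, since the error is about $0.9\times10^{-189}$.)</p>
<pre><code>from fractions import Fraction

def champernowne_digits(n):
    # Concatenate 1, 2, 3, ... until n decimal digits are available.
    s, k = "", 1
    while len(s) < n:
        s += str(k)
        k += 1
    return s[:n]

def fraction_digits(fr, n):
    # First n decimal digits of a fraction in [0, 1), with leading zeros.
    return str(fr.numerator * 10**n // fr.denominator).zfill(n)

approx = Fraction(10, 81) - Fraction(1002, 980100000000)
a = fraction_digits(approx, 200)
c = champernowne_digits(200)
# Index of the first disagreeing digit (200 if all 200 digits match).
print(next((i for i in range(200) if a[i] != c[i]), 200))
</code></pre>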
Answer 2: <p><strong>Warning:</strong> I am leaving the answer below while I try to repair it, but I now believe it to be seriously incomplete, if not misleading. It is certainly true that if the continued fraction of a constant contains extremely large terms, then the corresponding convergents will be exceptionally good rational approximations of the constant. But a constant having exceptionally good rational approximations does not imply that there will be large terms in its continued fraction, so merely noting the existence of these exceptionally good rational approximations is not a sufficient explanation for why the continued fraction has such large terms. A useful counterexample to keep in mind is the constant
<span class="math-container">$$
\sum_{n=0}^\infty\frac{1}{10^{2^n}}.
$$</span>
The inclusion of each successive term in this series doubles the number of correct digits in the decimal expansion, so just as I say in my answer with regard to the Champernowne constant, "Each additional term in this series provides an exponentially growing number of correct digits...." Nevertheless, the continued fraction contains no large terms. In fact, the terms have a simple pattern and, more importantly, are bounded, as was shown in</p>
<p>Jeffrey Shallit, <a href="https://doi.org/10.1016/0022-314X(79)90040-4" rel="nofollow noreferrer">Simple continued fractions for some irrational numbers</a>, <em>Journal of Number Theory</em> <strong>11</strong> (1979) 209-217.</p>
<p>Interestingly the Liouville constant and the constant
<span class="math-container">$$
\sum_{n=0}^\infty\frac{1}{10^{3^n}},
$$</span>
like the Champernowne constant, exhibit the large term phenomenon in their continued fractions. All three continued fractions contain a sequence of terms that grows super-exponentially. The continued fraction of the constant
<span class="math-container">$$
\sum_{n=0}^\infty\frac{1}{10^{2^n-n}}
$$</span>
has large terms as well, but they grow only exponentially, not super-exponentially. That the terms in the continued fraction of
<span class="math-container">$$
\sum_{n=0}^\infty\frac{1}{10^{2^n}}
$$</span>
do not grow at all indicates that the issues are rather subtle and that careful analysis is needed.</p>
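<p>The contrast is easy to observe computationally. The following sketch (mine, not from the cited paper) computes partial quotients of the counterexample with exact rationals; it assumes that truncating the series at $n=12$ is accurate enough for the first few dozen terms, which it is, because the omitted tail is below $10^{-8000}$:</p>
<pre><code>from fractions import Fraction

def cf_terms(x, n):
    # First n partial quotients of the simple continued fraction of x.
    terms = []
    for _ in range(n):
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return terms

# Truncation of sum 1/10^(2^k) at k = 12; tail is below 1e-8000.
x = sum(Fraction(1, 10**(2**k)) for k in range(13))
print(cf_terms(x, 30))  # the terms stay small -- no huge partial quotients
</code></pre>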
<p><strong>The original answer:</strong> To expand a bit on Rosie F's answer, we can write
<span class="math-container">\begin{align}
C_{10} =& \frac{10}{81} - \left(\frac{91}{81}-\frac{991}{9801}\right)\times10^{-9}-\left(\frac{9901}{9801}-\frac{99901}{998001}\right)\times10^{-189}\\
&-\left(\frac{999001}{998001}-\frac{9999001}{99980001}\right)\times10^{-2889}-\left(\frac{99990001}{99980001}-\frac{999990001}{9999800001}\right)\times10^{-38889}+\ldots.
\end{align}</span>
One can see that the first term of this series approximates <span class="math-container">$C_{10}$</span> with an error of about <span class="math-container">$10^{-9}$</span>, the sum of the first two terms approximates <span class="math-container">$C_{10}$</span> with an error of about <span class="math-container">$0.9\times10^{-189}$</span>, the sum of the first three terms approximates <span class="math-container">$C_{10}$</span> with an error of about <span class="math-container">$0.9\times10^{-2889}$</span>, and so on. Each additional term in this series provides an exponentially growing number of correct digits of <span class="math-container">$C_{10}$</span>.</p>
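<p>As a cross-check (a small sketch of mine), the $n=1$ term of this series is exactly the correction $1002/980100000000$ used in the other answer:</p>
<pre><code>from fractions import Fraction

coeff = Fraction(91, 81) - Fraction(991, 9801)        # = 10020/9801
print(coeff / 10**9 == Fraction(1002, 980100000000))  # True
</code></pre>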
<p>To see where the series comes from, write the base-<span class="math-container">$b$</span> Champernowne constant as a double sum,
<span class="math-container">$$
C_b=\sum_{n=1}^\infty b^{-E_b(n-1)}\sum_{k=b^{n-1}}^{b^n-1}\frac{k}{b^{n(k-b^{n-1}+1)}},
$$</span>
where <span class="math-container">$E_b(n)=\sum_{k=1}^nk(b-1)b^{k-1}$</span>. Notice that this is just a direct translation of the definition of the constant: the inner sum runs over all the <span class="math-container">$n$</span>-digit base-<span class="math-container">$b$</span> numbers, giving, for example,
<span class="math-container">$$0.1+0.02+0.003+\ldots+0.000000009=0.123456789$$</span>
when <span class="math-container">$b=10$</span> and <span class="math-container">$n=1$</span> and
<span class="math-container">$$0.10+0.0011+0.000012+\ldots+0.\underbrace{00\ldots0}_{\text{178 zeroes}}99=0.101112\ldots9899$$</span> when <span class="math-container">$b=10$</span> and <span class="math-container">$n=2$</span>; the pre-factor of this sum just shifts the sum to the appropriate starting position <span class="math-container">$E_b(n-1)$</span> places to the right of the radix point. (This is the number of places occupied by base-<span class="math-container">$b$</span> numbers of fewer than <span class="math-container">$n$</span> digits.)</p>
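<p>The inner sum can be checked directly with exact arithmetic; for instance, this short sketch (mine) reproduces the $b=10$, $n=2$ block displayed above:</p>
<pre><code>from fractions import Fraction

b, n = 10, 2
inner = sum(Fraction(k, b**(n * (k - b**(n - 1) + 1)))
            for k in range(b**(n - 1), b**n))
# The sum occupies exactly 180 decimal places: 0.101112...9899.
digits = str(inner.numerator * 10**180 // inner.denominator).zfill(180)
print(digits[:20], digits[-4:])  # 10111213141516171819 9899
</code></pre>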
<p>Both the expression for <span class="math-container">$E_b(n)$</span> and the inner sum of the expression for <span class="math-container">$C_b$</span> are of the form <span class="math-container">$C\sum_{k=p}^q ka^k$</span>, which can be evaluated using
<span class="math-container">$$
\sum_{k=p}^q ka^k=a^p\frac{p-(p-1)a}{(1-a)^2}-a^{q+1}\frac{q+1-qa}{(1-a)^2}.
$$</span>
This can be used to write
<span class="math-container">$$
E_b(n) = nb^n-\frac{b^n-1}{b-1}
$$</span>
and
<span class="math-container">$$
b^{-E_b(n-1)} \sum_{k=b^{n-1}}^{b^n-1}\frac{k}{b^{n(k-b^{n-1}+1)}} = \frac{b^{2n-1}-b^{n-1}+1}{\left(b^n-1\right)^2}b^{-E_b(n-1)}-\frac{b^{2n}-b^n+1}{\left(b^n-1\right)^2}b^{-E_b(n)},
$$</span>
from which we get
<span class="math-container">$$
C_b = \frac{b}{(b-1)^2}-\sum_{n=1}^\infty\left(\frac{b^{2n}-b^n+1}{\left(b^n-1\right)^2} - \frac{b^{2n+1}-b^n+1}{\left(b^{n+1}-1\right)^2}\right)b^{-E_b(n)}.
$$</span>
This produces the series for <span class="math-container">$C_{10}$</span> at the beginning of this post.</p>
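<p>For concreteness, here is a small implementation of $E_b(n)$ and of the truncated series (my own sketch, with my own names); already the $N=2$ truncation reproduces $C_{10}$ to within about $10^{-2889}$:</p>
<pre><code>from fractions import Fraction

def E(b, n):
    return n * b**n - (b**n - 1) // (b - 1)

def C_partial(b, N):
    # The series for C_b truncated after the n = N term.
    c = Fraction(b, (b - 1)**2)
    for n in range(1, N + 1):
        coeff = (Fraction(b**(2*n) - b**n + 1, (b**n - 1)**2)
                 - Fraction(b**(2*n + 1) - b**n + 1, (b**(n + 1) - 1)**2))
        c -= coeff / b**E(b, n)
    return c

x = C_partial(10, 2)
print(str(x.numerator * 10**60 // x.denominator))  # 123456789101112...
</code></pre>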
<p>The rational numbers one gets by truncating this series at <span class="math-container">$n=N$</span> for some finite <span class="math-container">$N$</span> generally agree with the convergents one gets by truncating the continued fraction of <span class="math-container">$C_b$</span> just before one of the high water marks. (A <em>high water mark</em> is a coefficient in the continued fraction larger than all previous coefficients.) There are some exceptions when <span class="math-container">$b$</span> and <span class="math-container">$N$</span> are both very small.</p>
<p><strong>Remark on the intuition:</strong>
I wanted to address this part of your post:</p>
<blockquote>
<blockquote>
<p>How can I understand this intuitively ?</p>
</blockquote>
<p>The decimal representation of Champerowne's number must have parts which make the number look like a rational because of appearing periods, but it is not obvious where these (large) periods should be.</p>
</blockquote>
<p>Intuitively you can say that a real number looks rational if its decimal expansion agrees for many places with that of a rational number whose numerator and denominator have far fewer digits than has the string of places on which the agreement occurs. For example, which of the following two numbers look rational?
<span class="math-container">\begin{align}
&0.01886792452830\ldots\\
&0.01818181818181\ldots
\end{align}</span>
You can say that both look rational since they both agree to many places with a rational number that has a two-digit denominator, the first with <span class="math-container">$\frac{1}{53}$</span>, the second with <span class="math-container">$\frac{1}{55}$</span>. That the second appears to have a period of length <span class="math-container">$2$</span> does not make it closer to rational than the first. The Champernowne constant is similar to the first of these examples: there is no short periodic pattern close to the decimal point to clue you in to the fact that the constant is very close to a rational number.</p>
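<p>A two-line check of this example (a sketch using Python's <code>decimal</code> module; the precision of <span class="math-container">$15$</span> digits is an arbitrary choice):</p>
<pre><code>from decimal import Decimal, getcontext

getcontext().prec = 15
print(Decimal(1) / 53)   # 0.0188679245283019 -- no visible period, yet rational
print(Decimal(1) / 55)   # 0.0181818181818182 -- obvious period of length 2
</code></pre>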
|
What Is The Sum of All of The Real Root | 2,098,496 | 5 | https://math.stackexchange.com/questions/2098496/what-is-the-sum-of-all-of-the-real-root | <p>I found this question on my test:</p>
<blockquote>
<blockquote>
<p>What is the sum of all of the real roots of $x^3-4x^2+x=-6$?</p>
<ul>
<li><strong>A.)</strong> $-4$</li>
<li><strong>B.)</strong> $-2$</li>
<li><strong>C.)</strong> $0$</li>
<li><strong>D.)</strong> $2$</li>
<li><strong>E.)</strong> $4$</li>
</ul>
</blockquote>
</blockquote>
<p><strong>My answer: $2 + 3 = 5$</strong>, but that's not an option. Was the question wrong, or did I not pay enough attention?</p>
| [
{
"answer_id": 2098498,
"score": 6,
"text": "<p>$$x^3-4x^2+x=-6 \\implies x^3-4x^2+x+6=0 \\implies (x-2)(x-3)(x+1)=0 $$</p>\n\n<p>You missed one solution, $x=-1$. Thus, the answer is $4$. </p>\n\n<p>Some people may suggest that you use Vieta's formula, but IMO that would be unwise. </p>\n\n<p>This is because Vieta's formula adds all the solutions, even the complex ones, but the question at hand explicitly asks for only real solutions. </p>\n\n<p>So this would probably be the best way to do it. </p>\n"
},
{
"answer_id": 2098500,
"score": 2,
"text": "<p>It is easy to find these roots: $-1,2,3$, so the sum is $4$.</p>\n"
}
] | Question: What Is The Sum of All of The Real Roots
Answer 1: <p>$$x^3-4x^2+x=-6 \implies x^3-4x^2+x+6=0 \implies (x-2)(x-3)(x+1)=0 $$</p>
<p>You missed one solution, $x=-1$. Thus, the answer is $4$. </p>
<p>Some people may suggest that you use Vieta's formula, but IMO that would be unwise. </p>
<p>This is because Vieta's formula adds all the solutions, even the complex ones, but the question at hand explicitly asks for only real solutions. </p>
<p>So this would probably be the best way to do it. </p>
Answer 2: <p>It is easy to find these roots: $-1,2,3$, so the sum is $4$.</p>
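Both answers are easy to sanity-check mechanically. A minimal sketch, assuming Python with sympy available (the variable names are mine):

```python
# solve x^3 - 4x^2 + x = -6 and sum the (here all real) roots
import sympy as sp

x = sp.symbols('x')
roots = sp.solve(sp.Eq(x**3 - 4*x**2 + x, -6), x)
print(roots)       # [-1, 2, 3] -- all three roots are real
print(sum(roots))  # 4
```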
|
Proof that a specific bounded sequence converges? | 2,098,860 | 5 | https://math.stackexchange.com/questions/2098860/proof-that-a-specific-bounded-sequence-converges | <p>$(a_n)$ is a bounded sequence such that $$\lim_{n\to\infty}(a_{2^n}-3a_n)=0$$</p>
<p>Prove that $(a_n)$ converges and determine the limit.</p>
<p>I haven't got a clue what to do here. I've been told I could look at the accumulation points of the sequence, but honestly I don't understand how I should proceed. Any kind of hint would be really helpful.
Thanks in advance!</p>
| [
{
"answer_id": 2098883,
"score": 2,
"text": "<p><strong>Hint</strong>: If $L$ is a limit point then $3L$ is also a limit point. Since sequence is bounded $L=0$. Thus sequence converges to $0$.</p>\n"
}
] | Question: Proof that a specific bounded sequence converges?
Answer 1: <p><strong>Hint</strong>: If $L$ is a limit point, then $3L$ is also a limit point (because $a_{2^n}-3a_n\to0$), hence so is $3^kL$ for every $k$. Since the sequence is bounded, this forces $L=0$. Thus the sequence converges to $0$.</p>
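A numerical illustration of why the limit must be $0$ (this is my own sketch, not part of the hint): if $a_n\to L$, then $a_{2^n}-3a_n\to L-3L=-2L$, so the hypothesis rules out every $L\neq0$. Taking a hypothetical bounded sequence with $L=1$:

```python
# a_n = 1 + 1/n is bounded and converges to L = 1;
# a_{2^n} - 3*a_n then tends to -2L = -2, so the hypothesis fails for L != 0
a = lambda n: 1 + 1 / n

for n in range(1, 11):
    print(n, a(2**n) - 3 * a(n))  # approaches -2, not 0
```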
|
Proving that $19\mid 5^{2n+1}+3^{n+2} \cdot 2^{n-1}$ | 2,099,097 | 5 | https://math.stackexchange.com/questions/2099097/proving-that-19-mid-52n13n2-cdot-2n-1 | <p>How can I prove that $$5^{2n+1}+3^{n+2} \cdot 2^{n-1} $$ is divisible by $19$ for any nonnegative $n$? What modulo should I choose?</p>
| [
{
"answer_id": 2099129,
"score": 30,
"text": "<p>For $n=0$, the formula says that $\\left.19\\middle|\\frac{19}2\\right.$, which is false. So consider $n\\ge1$:\n$$\n\\begin{align}\n5^{2n+1}+3^{n+2}2^{n-1}\n&=125\\cdot25^{n-1}+27\\cdot6^{n-1}\\\\\n&\\equiv11\\cdot6^{n-1}+8\\cdot6^{n-1}&\\pmod{19}\\\\\n&=19\\cdot6^{n-1}\\\\\n&\\equiv0&\\pmod{19}\n\\end{align}\n$$\nSince\n$$\n\\begin{align}\n125&\\equiv11&\\pmod{19}\\\\\n25&\\equiv6&\\pmod{19}\\\\\n27&\\equiv8&\\pmod{19}\n\\end{align}\n$$</p>\n"
},
{
"answer_id": 2099103,
"score": 18,
"text": "<p>$5^{2n+1} = 5 \\times 25^n \\equiv 5 \\times 6^n$ (modulo 19).\nHence we have $5^{2n+1} + 3^{n+2}\\times2^{n-1} \\equiv 6^{n-1} \\times (30 + 27) = 6^{n-1}\\times3\\times 19$ (modulo 19)</p>\n"
},
{
"answer_id": 2099138,
"score": 5,
"text": "<p>Its <em>arithmetical</em> essence is clarified using <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Product and Power Rules,</a> namely</p>\n\n<p>$$\\begin{align}{\\rm mod}\\,\\ 19\\!:\\qquad \\big[\\,\\ 5^{\\large\\color{#c00}1} \\ &\\equiv\\, -3^{\\large\\color{#c00} 2}\\ \\cdot\\, 2^{\\large\\color{#c00}{-1}}\\big]\\ \\ \\ {\\rm by}\\ \\ \\ \\ \\,2\\cdot 5\\equiv -3^{\\large 2}\\\\\n \\times\\ \\ \\, \\big[\\, 5^{\\large 2n} &\\equiv\\ \\ \\ 3^{\\large n}\\,\\cdot\\, 2^{\\large n}\\ \\ \\ \\big]\\ \\ \\ {\\rm by} \\ \\ \\Big[\\,5^{\\large 2}\\equiv 3\\cdot 2\\,\\Big]^{\\large n}\\\\[.1em]\n\\Rightarrow\\,\\ 5^{\\large 2n+\\color{#c00}1}\\!&\\equiv -3^{\\large n+\\color{#c00}2} 2^{\\large n\\color{#c00}{-1}}\n\\end{align}\\qquad\\qquad\\quad\\ $$</p>\n\n<p>Therefore we infer that $\\ 19\\mid 5^{\\large 2n+1}+\\ 3^{\\large n+2} 2^{\\large n-1}$ when it is integral, i.e. for all $\\,n\\ge 1$</p>\n\n<p><strong>Remark</strong> $ $ It is even clearer when written in fractional form, namely</p>\n\n<p>$$ \\dfrac{5^{\\large 2n+1}}{3^{\\large n+2}2^{\\large n-1} }\\ \\equiv\\ \\left[\\dfrac{2\\cdot 5}{3\\cdot 3}\\right]\\left[\\dfrac{5^{\\large 2}}{3\\cdot 2}\\right]^{\\large n}\\!\\!\\equiv\\, -1\\cdot 1^{\\large n}\\equiv\\, -1$$</p>\n\n<p>Notice how use of congruence language <em>greatly</em> simplifies the inductive step, reducing it to the <em>trivial</em> induction that $\\,1^n\\equiv 1.$</p>\n"
},
{
"answer_id": 2099111,
"score": 4,
"text": "<p>You can prove this by induction.</p>\n\n<hr>\n\n<p><strong>First, show that this is true for $n=1$:</strong></p>\n\n<p>$5^{2\\cdot1+1}+3^{1+2}\\cdot2^{1-1}=19\\cdot8$</p>\n\n<p><strong>Second, assume that this is true for $n$:</strong></p>\n\n<p>$5^{2n+1}+3^{n+2}\\cdot2^{n-1}=19k$</p>\n\n<p><strong>Third, prove that this is true for $n+1$:</strong></p>\n\n<p>$5^{2(n+1)+1}+3^{n+1+2}\\cdot2^{n+1-1}=$</p>\n\n<p>$5^{2+2n+1}+3^{1+n+2}\\cdot2^{1+n-1}=$</p>\n\n<p>$5^{2}\\cdot5^{2n+1}+3^{1}\\cdot3^{n+2}\\cdot2^{1}\\cdot2^{n-1}=$</p>\n\n<p>$5^{2}\\cdot5^{2n+1}+3^{1}\\cdot2^{1}\\cdot3^{n+2}\\cdot2^{n-1}=$</p>\n\n<p>$25\\cdot5^{2n+1}+\\color\\green{6}\\cdot3^{n+2}\\cdot2^{n-1}=$</p>\n\n<p>$25\\cdot5^{2n+1}+(\\color\\green{25-19})\\cdot3^{n+2}\\cdot2^{n-1}=$</p>\n\n<p>$25\\cdot5^{2n+1}+25\\cdot3^{n+2}\\cdot2^{n-1}-19\\cdot3^{n+2}\\cdot2^{n-1}=$</p>\n\n<p>$25\\cdot(\\color\\red{5^{2n+1}+3^{n+2}\\cdot2^{n-1}})-19\\cdot3^{n+2}\\cdot2^{n-1}=$</p>\n\n<p>$25\\cdot\\color\\red{19k}-19\\cdot3^{n+2}\\cdot2^{n-1}=$</p>\n\n<p>$19\\cdot25k-19\\cdot3^{n+2}\\cdot2^{n-1}=$</p>\n\n<p>$19\\cdot(25k-3^{n+2}\\cdot2^{n-1})$</p>\n\n<hr>\n\n<p>Please note that the assumption is used only in the part marked red.</p>\n"
},
{
"answer_id": 2099102,
"score": 2,
"text": "<p>Denote $\\mathcal{P}(n)$ the statement that $5^{2n+1} + 3^{n+2}\\cdot 2^{n-1}$ is divisible by $19$. You can check for yourself that $\\mathcal{P}(1)$ is true. A setup for the proof of $\\mathcal{P}(n+1)$:</p>\n\n<p>\\begin{align} \n5^{2(n+1)+1} + 3^{(n+1)+2}\\cdot 2^{(n+1)-1} & = \\\\\n25 \\times 5^{2n+1} + 6 \\times 3^{n+2}\\cdot 2^{n-1} & = \\\\\n19 \\times 5^{2n+1} + 6 \\times 5^{2n+1} + 6 \\times 3^{n+2}\\cdot 2^{n-1} & = \\cdots\n\\end{align} can you finish the proof from here?</p>\n"
},
{
"answer_id": 2099116,
"score": 2,
"text": "<p>$2^{n-1} = \\frac12 2^n \\equiv 10 \\cdot 2^n \\pmod {19}$. Hence:</p>\n\n<p>$5^{2n+1} +3^{n+2} \\cdot 2^{n-1} \\equiv 5 \\cdot 6^n + 90 \\cdot 6^n \\pmod {19} \\equiv 95 \\cdot 6^n \\pmod {19} \\equiv 0 \\pmod {19}$</p>\n"
}
] | Question: Proving that $19\mid 5^{2n+1}+3^{n+2} \cdot 2^{n-1}$
Answer 1: <p>For $n=0$, the formula says that $\left.19\middle|\frac{19}2\right.$, which is false. So consider $n\ge1$:
$$
\begin{align}
5^{2n+1}+3^{n+2}2^{n-1}
&=125\cdot25^{n-1}+27\cdot6^{n-1}\\
&\equiv11\cdot6^{n-1}+8\cdot6^{n-1}&\pmod{19}\\
&=19\cdot6^{n-1}\\
&\equiv0&\pmod{19}
\end{align}
$$
Since
$$
\begin{align}
125&\equiv11&\pmod{19}\\
25&\equiv6&\pmod{19}\\
27&\equiv8&\pmod{19}
\end{align}
$$</p>
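The congruences above are easy to verify by brute force; a minimal sketch in plain Python (nothing below comes from the answer itself):

```python
# check 19 | 5^(2n+1) + 3^(n+2) * 2^(n-1) for n >= 1, directly and
# via the reduction 125 * 25^(n-1) + 27 * 6^(n-1) (mod 19)
for n in range(1, 50):
    direct = (5**(2*n + 1) + 3**(n + 2) * 2**(n - 1)) % 19
    reduced = (125 * pow(25, n - 1, 19) + 27 * pow(6, n - 1, 19)) % 19
    assert direct == 0 and reduced == 0
# n = 0 gives 5 + 9/2 = 19/2, not an integer, matching the caveat above
```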
Answer 2: <p>$5^{2n+1} = 5 \times 25^n \equiv 5 \times 6^n$ (modulo 19).
Hence we have $5^{2n+1} + 3^{n+2}\times2^{n-1} \equiv 6^{n-1} \times (30 + 27) = 6^{n-1}\times3\times 19$ (modulo 19)</p>
Answer 3: <p>Its <em>arithmetical</em> essence is clarified using <a href="https://math.stackexchange.com/a/879262/242">Congruence Product and Power Rules,</a> namely</p>
<p>$$\begin{align}{\rm mod}\,\ 19\!:\qquad \big[\,\ 5^{\large\color{#c00}1} \ &\equiv\, -3^{\large\color{#c00} 2}\ \cdot\, 2^{\large\color{#c00}{-1}}\big]\ \ \ {\rm by}\ \ \ \ \,2\cdot 5\equiv -3^{\large 2}\\
\times\ \ \, \big[\, 5^{\large 2n} &\equiv\ \ \ 3^{\large n}\,\cdot\, 2^{\large n}\ \ \ \big]\ \ \ {\rm by} \ \ \Big[\,5^{\large 2}\equiv 3\cdot 2\,\Big]^{\large n}\\[.1em]
\Rightarrow\,\ 5^{\large 2n+\color{#c00}1}\!&\equiv -3^{\large n+\color{#c00}2} 2^{\large n\color{#c00}{-1}}
\end{align}\qquad\qquad\quad\ $$</p>
<p>Therefore we infer that $\ 19\mid 5^{\large 2n+1}+\ 3^{\large n+2} 2^{\large n-1}$ when it is integral, i.e. for all $\,n\ge 1$</p>
<p><strong>Remark</strong> $ $ It is even clearer when written in fractional form, namely</p>
<p>$$ \dfrac{5^{\large 2n+1}}{3^{\large n+2}2^{\large n-1} }\ \equiv\ \left[\dfrac{2\cdot 5}{3\cdot 3}\right]\left[\dfrac{5^{\large 2}}{3\cdot 2}\right]^{\large n}\!\!\equiv\, -1\cdot 1^{\large n}\equiv\, -1$$</p>
<p>Notice how use of congruence language <em>greatly</em> simplifies the inductive step, reducing it to the <em>trivial</em> induction that $\,1^n\equiv 1.$</p>
Answer 4: <p>You can prove this by induction.</p>
<hr>
<p><strong>First, show that this is true for $n=1$:</strong></p>
<p>$5^{2\cdot1+1}+3^{1+2}\cdot2^{1-1}=19\cdot8$</p>
<p><strong>Second, assume that this is true for $n$:</strong></p>
<p>$5^{2n+1}+3^{n+2}\cdot2^{n-1}=19k$</p>
<p><strong>Third, prove that this is true for $n+1$:</strong></p>
<p>$5^{2(n+1)+1}+3^{n+1+2}\cdot2^{n+1-1}=$</p>
<p>$5^{2+2n+1}+3^{1+n+2}\cdot2^{1+n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot3^{n+2}\cdot2^{1}\cdot2^{n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot2^{1}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+\color\green{6}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+(\color\green{25-19})\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+25\cdot3^{n+2}\cdot2^{n-1}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot(\color\red{5^{2n+1}+3^{n+2}\cdot2^{n-1}})-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot\color\red{19k}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot25k-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot(25k-3^{n+2}\cdot2^{n-1})$</p>
<hr>
<p>Please note that the assumption is used only in the part marked red.</p>
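The identity driving this inductive step can be checked mechanically; a sketch in plain Python ($S$ is my name for the expression, not the answer's):

```python
def S(n):
    # S(n) = 5^(2n+1) + 3^(n+2) * 2^(n-1); an integer for n >= 1
    return 5**(2*n + 1) + 3**(n + 2) * 2**(n - 1)

for n in range(1, 30):
    # the step used above: S(n+1) = 25*S(n) - 19 * 3^(n+2) * 2^(n-1)
    assert S(n + 1) == 25 * S(n) - 19 * 3**(n + 2) * 2**(n - 1)
    assert S(n) % 19 == 0
```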
Answer 5: <p>Denote $\mathcal{P}(n)$ the statement that $5^{2n+1} + 3^{n+2}\cdot 2^{n-1}$ is divisible by $19$. You can check for yourself that $\mathcal{P}(1)$ is true. A setup for the proof of $\mathcal{P}(n+1)$:</p>
<p>\begin{align}
5^{2(n+1)+1} + 3^{(n+1)+2}\cdot 2^{(n+1)-1} & = \\
25 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \\
19 \times 5^{2n+1} + 6 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \cdots
\end{align} can you finish the proof from here?</p>
Answer 6: <p>$2^{n-1} = \frac12 2^n \equiv 10 \cdot 2^n \pmod {19}$. Hence:</p>
<p>$5^{2n+1} +3^{n+2} \cdot 2^{n-1} \equiv 5 \cdot 6^n + 90 \cdot 6^n \pmod {19} \equiv 95 \cdot 6^n \pmod {19} \equiv 0 \pmod {19}$</p>
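The inverse $\frac12\equiv10\pmod{19}$ used here is easy to confirm; a sketch assuming Python 3.8+ (where `pow` accepts a negative exponent together with a modulus):

```python
# 2 * 10 = 20 = 1 (mod 19), so 1/2 = 10 in Z/19Z
print(pow(2, -1, 19))  # 10

# hence 5*6^n + 90*6^n = 95*6^n = 5*19*6^n = 0 (mod 19)
for n in range(1, 30):
    assert (5 * pow(6, n, 19) + 90 * pow(6, n, 19)) % 19 == 0
```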
Answer 7: <p>For $n=0$, the formula says that $\left.19\middle|\frac{19}2\right.$, which is false. So consider $n\ge1$:
$$
\begin{align}
5^{2n+1}+3^{n+2}2^{n-1}
&=125\cdot25^{n-1}+27\cdot6^{n-1}\\
&\equiv11\cdot6^{n-1}+8\cdot6^{n-1}&\pmod{19}\\
&=19\cdot6^{n-1}\\
&\equiv0&\pmod{19}
\end{align}
$$
Since
$$
\begin{align}
125&\equiv11&\pmod{19}\\
25&\equiv6&\pmod{19}\\
27&\equiv8&\pmod{19}
\end{align}
$$</p>
Answer 8: <p>$5^{2n+1} = 5 \times 25^n \equiv 5 \times 6^n$ (modulo 19).
Hence we have $5^{2n+1} + 3^{n+2}\times2^{n-1} \equiv 6^{n-1} \times (30 + 27) = 6^{n-1}\times3\times 19$ (modulo 19)</p>
Answer 9: <p>Its <em>arithmetical</em> essence is clarified using <a href="https://math.stackexchange.com/a/879262/242">Congruence Product and Power Rules,</a> namely</p>
<p>$$\begin{align}{\rm mod}\,\ 19\!:\qquad \big[\,\ 5^{\large\color{#c00}1} \ &\equiv\, -3^{\large\color{#c00} 2}\ \cdot\, 2^{\large\color{#c00}{-1}}\big]\ \ \ {\rm by}\ \ \ \ \,2\cdot 5\equiv -3^{\large 2}\\
\times\ \ \, \big[\, 5^{\large 2n} &\equiv\ \ \ 3^{\large n}\,\cdot\, 2^{\large n}\ \ \ \big]\ \ \ {\rm by} \ \ \Big[\,5^{\large 2}\equiv 3\cdot 2\,\Big]^{\large n}\\[.1em]
\Rightarrow\,\ 5^{\large 2n+\color{#c00}1}\!&\equiv -3^{\large n+\color{#c00}2} 2^{\large n\color{#c00}{-1}}
\end{align}\qquad\qquad\quad\ $$</p>
<p>Therefore we infer that $\ 19\mid 5^{\large 2n+1}+\ 3^{\large n+2} 2^{\large n-1}$ when it is integral, i.e. for all $\,n\ge 1$</p>
<p><strong>Remark</strong> $ $ It is even clearer when written in fractional form, namely</p>
<p>$$ \dfrac{5^{\large 2n+1}}{3^{\large n+2}2^{\large n-1} }\ \equiv\ \left[\dfrac{2\cdot 5}{3\cdot 3}\right]\left[\dfrac{5^{\large 2}}{3\cdot 2}\right]^{\large n}\!\!\equiv\, -1\cdot 1^{\large n}\equiv\, -1$$</p>
<p>Notice how use of congruence language <em>greatly</em> simplifies the inductive step, reducing it to the <em>trivial</em> induction that $\,1^n\equiv 1.$</p>
Answer 10: <p>You can prove this by induction.</p>
<hr>
<p><strong>First, show that this is true for $n=1$:</strong></p>
<p>$5^{2\cdot1+1}+3^{1+2}\cdot2^{1-1}=19\cdot8$</p>
<p><strong>Second, assume that this is true for $n$:</strong></p>
<p>$5^{2n+1}+3^{n+2}\cdot2^{n-1}=19k$</p>
<p><strong>Third, prove that this is true for $n+1$:</strong></p>
<p>$5^{2(n+1)+1}+3^{n+1+2}\cdot2^{n+1-1}=$</p>
<p>$5^{2+2n+1}+3^{1+n+2}\cdot2^{1+n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot3^{n+2}\cdot2^{1}\cdot2^{n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot2^{1}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+\color\green{6}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+(\color\green{25-19})\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+25\cdot3^{n+2}\cdot2^{n-1}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot(\color\red{5^{2n+1}+3^{n+2}\cdot2^{n-1}})-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot\color\red{19k}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot25k-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot(25k-3^{n+2}\cdot2^{n-1})$</p>
<hr>
<p>Please note that the assumption is used only in the part marked red.</p>
Answer 11: <p>Denote $\mathcal{P}(n)$ the statement that $5^{2n+1} + 3^{n+2}\cdot 2^{n-1}$ is divisible by $19$. You can check for yourself that $\mathcal{P}(1)$ is true. A setup for the proof of $\mathcal{P}(n+1)$:</p>
<p>\begin{align}
5^{2(n+1)+1} + 3^{(n+1)+2}\cdot 2^{(n+1)-1} & = \\
25 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \\
19 \times 5^{2n+1} + 6 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \cdots
\end{align} can you finish the proof from here?</p>
Answer 12: <p>$2^{n-1} = \frac12 2^n \equiv 10 \cdot 2^n \pmod {19}$. Hence:</p>
<p>$5^{2n+1} +3^{n+2} \cdot 2^{n-1} \equiv 5 \cdot 6^n + 90 \cdot 6^n \pmod {19} \equiv 95 \cdot 6^n \pmod {19} \equiv 0 \pmod {19}$</p>
Answer 13: <p>For $n=0$, the formula says that $\left.19\middle|\frac{19}2\right.$, which is false. So consider $n\ge1$:
$$
\begin{align}
5^{2n+1}+3^{n+2}2^{n-1}
&=125\cdot25^{n-1}+27\cdot6^{n-1}\\
&\equiv11\cdot6^{n-1}+8\cdot6^{n-1}&\pmod{19}\\
&=19\cdot6^{n-1}\\
&\equiv0&\pmod{19}
\end{align}
$$
Since
$$
\begin{align}
125&\equiv11&\pmod{19}\\
25&\equiv6&\pmod{19}\\
27&\equiv8&\pmod{19}
\end{align}
$$</p>
Answer 14: <p>$5^{2n+1} = 5 \times 25^n \equiv 5 \times 6^n$ (modulo 19).
Hence we have $5^{2n+1} + 3^{n+2}\times2^{n-1} \equiv 6^{n-1} \times (30 + 27) = 6^{n-1}\times3\times 19$ (modulo 19)</p>
Answer 15: <p>Its <em>arithmetical</em> essence is clarified using <a href="https://math.stackexchange.com/a/879262/242">Congruence Product and Power Rules,</a> namely</p>
<p>$$\begin{align}{\rm mod}\,\ 19\!:\qquad \big[\,\ 5^{\large\color{#c00}1} \ &\equiv\, -3^{\large\color{#c00} 2}\ \cdot\, 2^{\large\color{#c00}{-1}}\big]\ \ \ {\rm by}\ \ \ \ \,2\cdot 5\equiv -3^{\large 2}\\
\times\ \ \, \big[\, 5^{\large 2n} &\equiv\ \ \ 3^{\large n}\,\cdot\, 2^{\large n}\ \ \ \big]\ \ \ {\rm by} \ \ \Big[\,5^{\large 2}\equiv 3\cdot 2\,\Big]^{\large n}\\[.1em]
\Rightarrow\,\ 5^{\large 2n+\color{#c00}1}\!&\equiv -3^{\large n+\color{#c00}2} 2^{\large n\color{#c00}{-1}}
\end{align}\qquad\qquad\quad\ $$</p>
<p>Therefore we infer that $\ 19\mid 5^{\large 2n+1}+\ 3^{\large n+2} 2^{\large n-1}$ when it is integral, i.e. for all $\,n\ge 1$</p>
<p><strong>Remark</strong> $ $ It is even clearer when written in fractional form, namely</p>
<p>$$ \dfrac{5^{\large 2n+1}}{3^{\large n+2}2^{\large n-1} }\ \equiv\ \left[\dfrac{2\cdot 5}{3\cdot 3}\right]\left[\dfrac{5^{\large 2}}{3\cdot 2}\right]^{\large n}\!\!\equiv\, -1\cdot 1^{\large n}\equiv\, -1$$</p>
<p>Notice how use of congruence language <em>greatly</em> simplifies the inductive step, reducing it to the <em>trivial</em> induction that $\,1^n\equiv 1.$</p>
Answer 16: <p>You can prove this by induction.</p>
<hr>
<p><strong>First, show that this is true for $n=1$:</strong></p>
<p>$5^{2\cdot1+1}+3^{1+2}\cdot2^{1-1}=19\cdot8$</p>
<p><strong>Second, assume that this is true for $n$:</strong></p>
<p>$5^{2n+1}+3^{n+2}\cdot2^{n-1}=19k$</p>
<p><strong>Third, prove that this is true for $n+1$:</strong></p>
<p>$5^{2(n+1)+1}+3^{n+1+2}\cdot2^{n+1-1}=$</p>
<p>$5^{2+2n+1}+3^{1+n+2}\cdot2^{1+n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot3^{n+2}\cdot2^{1}\cdot2^{n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot2^{1}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+\color\green{6}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+(\color\green{25-19})\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+25\cdot3^{n+2}\cdot2^{n-1}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot(\color\red{5^{2n+1}+3^{n+2}\cdot2^{n-1}})-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot\color\red{19k}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot25k-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot(25k-3^{n+2}\cdot2^{n-1})$</p>
<hr>
<p>Please note that the assumption is used only in the part marked red.</p>
Answer 17: <p>Denote $\mathcal{P}(n)$ the statement that $5^{2n+1} + 3^{n+2}\cdot 2^{n-1}$ is divisible by $19$. You can check for yourself that $\mathcal{P}(1)$ is true. A setup for the proof of $\mathcal{P}(n+1)$:</p>
<p>\begin{align}
5^{2(n+1)+1} + 3^{(n+1)+2}\cdot 2^{(n+1)-1} & = \\
25 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \\
19 \times 5^{2n+1} + 6 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \cdots
\end{align} can you finish the proof from here?</p>
Answer 18: <p>$2^{n-1} = \frac12 2^n \equiv 10 \cdot 2^n \pmod {19}$. Hence:</p>
<p>$5^{2n+1} +3^{n+2} \cdot 2^{n-1} \equiv 5 \cdot 6^n + 90 \cdot 6^n \pmod {19} \equiv 95 \cdot 6^n \pmod {19} \equiv 0 \pmod {19}$</p>
Answer 19: <p>For $n=0$, the formula says that $\left.19\middle|\frac{19}2\right.$, which is false. So consider $n\ge1$:
$$
\begin{align}
5^{2n+1}+3^{n+2}2^{n-1}
&=125\cdot25^{n-1}+27\cdot6^{n-1}\\
&\equiv11\cdot6^{n-1}+8\cdot6^{n-1}&\pmod{19}\\
&=19\cdot6^{n-1}\\
&\equiv0&\pmod{19}
\end{align}
$$
Since
$$
\begin{align}
125&\equiv11&\pmod{19}\\
25&\equiv6&\pmod{19}\\
27&\equiv8&\pmod{19}
\end{align}
$$</p>
Answer 20: <p>$5^{2n+1} = 5 \times 25^n \equiv 5 \times 6^n$ (modulo 19).
Hence we have $5^{2n+1} + 3^{n+2}\times2^{n-1} \equiv 6^{n-1} \times (30 + 27) = 6^{n-1}\times3\times 19$ (modulo 19)</p>
Answer 21: <p>Its <em>arithmetical</em> essence is clarified using <a href="https://math.stackexchange.com/a/879262/242">Congruence Product and Power Rules,</a> namely</p>
<p>$$\begin{align}{\rm mod}\,\ 19\!:\qquad \big[\,\ 5^{\large\color{#c00}1} \ &\equiv\, -3^{\large\color{#c00} 2}\ \cdot\, 2^{\large\color{#c00}{-1}}\big]\ \ \ {\rm by}\ \ \ \ \,2\cdot 5\equiv -3^{\large 2}\\
\times\ \ \, \big[\, 5^{\large 2n} &\equiv\ \ \ 3^{\large n}\,\cdot\, 2^{\large n}\ \ \ \big]\ \ \ {\rm by} \ \ \Big[\,5^{\large 2}\equiv 3\cdot 2\,\Big]^{\large n}\\[.1em]
\Rightarrow\,\ 5^{\large 2n+\color{#c00}1}\!&\equiv -3^{\large n+\color{#c00}2} 2^{\large n\color{#c00}{-1}}
\end{align}\qquad\qquad\quad\ $$</p>
<p>Therefore we infer that $\ 19\mid 5^{\large 2n+1}+\ 3^{\large n+2} 2^{\large n-1}$ when it is integral, i.e. for all $\,n\ge 1$</p>
<p><strong>Remark</strong> $ $ It is even clearer when written in fractional form, namely</p>
<p>$$ \dfrac{5^{\large 2n+1}}{3^{\large n+2}2^{\large n-1} }\ \equiv\ \left[\dfrac{2\cdot 5}{3\cdot 3}\right]\left[\dfrac{5^{\large 2}}{3\cdot 2}\right]^{\large n}\!\!\equiv\, -1\cdot 1^{\large n}\equiv\, -1$$</p>
<p>Notice how use of congruence language <em>greatly</em> simplifies the inductive step, reducing it to the <em>trivial</em> induction that $\,1^n\equiv 1.$</p>
Answer 22: <p>You can prove this by induction.</p>
<hr>
<p><strong>First, show that this is true for $n=1$:</strong></p>
<p>$5^{2\cdot1+1}+3^{1+2}\cdot2^{1-1}=19\cdot8$</p>
<p><strong>Second, assume that this is true for $n$:</strong></p>
<p>$5^{2n+1}+3^{n+2}\cdot2^{n-1}=19k$</p>
<p><strong>Third, prove that this is true for $n+1$:</strong></p>
<p>$5^{2(n+1)+1}+3^{n+1+2}\cdot2^{n+1-1}=$</p>
<p>$5^{2+2n+1}+3^{1+n+2}\cdot2^{1+n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot3^{n+2}\cdot2^{1}\cdot2^{n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot2^{1}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+\color\green{6}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+(\color\green{25-19})\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+25\cdot3^{n+2}\cdot2^{n-1}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot(\color\red{5^{2n+1}+3^{n+2}\cdot2^{n-1}})-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot\color\red{19k}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot25k-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot(25k-3^{n+2}\cdot2^{n-1})$</p>
<hr>
<p>Please note that the assumption is used only in the part marked red.</p>
Answer 23: <p>Denote $\mathcal{P}(n)$ the statement that $5^{2n+1} + 3^{n+2}\cdot 2^{n-1}$ is divisible by $19$. You can check for yourself that $\mathcal{P}(1)$ is true. A setup for the proof of $\mathcal{P}(n+1)$:</p>
<p>\begin{align}
5^{2(n+1)+1} + 3^{(n+1)+2}\cdot 2^{(n+1)-1} & = \\
25 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \\
19 \times 5^{2n+1} + 6 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \cdots
\end{align} can you finish the proof from here?</p>
Answer 24: <p>$2^{n-1} = \frac12 2^n \equiv 10 \cdot 2^n \pmod {19}$. Hence:</p>
<p>$5^{2n+1} +3^{n+2} \cdot 2^{n-1} \equiv 5 \cdot 6^n + 90 \cdot 6^n \pmod {19} \equiv 95 \cdot 6^n \pmod {19} \equiv 0 \pmod {19}$</p>
Answer 25: <p>For $n=0$, the formula says that $\left.19\middle|\frac{19}2\right.$, which is false. So consider $n\ge1$:
$$
\begin{align}
5^{2n+1}+3^{n+2}2^{n-1}
&=125\cdot25^{n-1}+27\cdot6^{n-1}\\
&\equiv11\cdot6^{n-1}+8\cdot6^{n-1}&\pmod{19}\\
&=19\cdot6^{n-1}\\
&\equiv0&\pmod{19}
\end{align}
$$
Since
$$
\begin{align}
125&\equiv11&\pmod{19}\\
25&\equiv6&\pmod{19}\\
27&\equiv8&\pmod{19}
\end{align}
$$</p>
Answer 26: <p>$5^{2n+1} = 5 \times 25^n \equiv 5 \times 6^n$ (modulo 19).
Hence we have $5^{2n+1} + 3^{n+2}\times2^{n-1} \equiv 6^{n-1} \times (30 + 27) = 6^{n-1}\times3\times 19$ (modulo 19)</p>
Answer 27: <p>Its <em>arithmetical</em> essence is clarified using <a href="https://math.stackexchange.com/a/879262/242">Congruence Product and Power Rules,</a> namely</p>
<p>$$\begin{align}{\rm mod}\,\ 19\!:\qquad \big[\,\ 5^{\large\color{#c00}1} \ &\equiv\, -3^{\large\color{#c00} 2}\ \cdot\, 2^{\large\color{#c00}{-1}}\big]\ \ \ {\rm by}\ \ \ \ \,2\cdot 5\equiv -3^{\large 2}\\
\times\ \ \, \big[\, 5^{\large 2n} &\equiv\ \ \ 3^{\large n}\,\cdot\, 2^{\large n}\ \ \ \big]\ \ \ {\rm by} \ \ \Big[\,5^{\large 2}\equiv 3\cdot 2\,\Big]^{\large n}\\[.1em]
\Rightarrow\,\ 5^{\large 2n+\color{#c00}1}\!&\equiv -3^{\large n+\color{#c00}2} 2^{\large n\color{#c00}{-1}}
\end{align}\qquad\qquad\quad\ $$</p>
<p>Therefore we infer that $\ 19\mid 5^{\large 2n+1}+\ 3^{\large n+2} 2^{\large n-1}$ when it is integral, i.e. for all $\,n\ge 1$</p>
<p><strong>Remark</strong> $ $ It is even clearer when written in fractional form, namely</p>
<p>$$ \dfrac{5^{\large 2n+1}}{3^{\large n+2}2^{\large n-1} }\ \equiv\ \left[\dfrac{2\cdot 5}{3\cdot 3}\right]\left[\dfrac{5^{\large 2}}{3\cdot 2}\right]^{\large n}\!\!\equiv\, -1\cdot 1^{\large n}\equiv\, -1$$</p>
<p>Notice how use of congruence language <em>greatly</em> simplifies the inductive step, reducing it to the <em>trivial</em> induction that $\,1^n\equiv 1.$</p>
Answer 28: <p>You can prove this by induction.</p>
<hr>
<p><strong>First, show that this is true for $n=1$:</strong></p>
<p>$5^{2\cdot1+1}+3^{1+2}\cdot2^{1-1}=19\cdot8$</p>
<p><strong>Second, assume that this is true for $n$:</strong></p>
<p>$5^{2n+1}+3^{n+2}\cdot2^{n-1}=19k$</p>
<p><strong>Third, prove that this is true for $n+1$:</strong></p>
<p>$5^{2(n+1)+1}+3^{n+1+2}\cdot2^{n+1-1}=$</p>
<p>$5^{2+2n+1}+3^{1+n+2}\cdot2^{1+n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot3^{n+2}\cdot2^{1}\cdot2^{n-1}=$</p>
<p>$5^{2}\cdot5^{2n+1}+3^{1}\cdot2^{1}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+\color\green{6}\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+(\color\green{25-19})\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot5^{2n+1}+25\cdot3^{n+2}\cdot2^{n-1}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot(\color\red{5^{2n+1}+3^{n+2}\cdot2^{n-1}})-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$25\cdot\color\red{19k}-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot25k-19\cdot3^{n+2}\cdot2^{n-1}=$</p>
<p>$19\cdot(25k-3^{n+2}\cdot2^{n-1})$</p>
<hr>
<p>Please note that the assumption is used only in the part marked red.</p>
Answer 29: <p>Denote $\mathcal{P}(n)$ the statement that $5^{2n+1} + 3^{n+2}\cdot 2^{n-1}$ is divisible by $19$. You can check for yourself that $\mathcal{P}(1)$ is true. A setup for the proof of $\mathcal{P}(n+1)$:</p>
<p>\begin{align}
5^{2(n+1)+1} + 3^{(n+1)+2}\cdot 2^{(n+1)-1} & = \\
25 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \\
19 \times 5^{2n+1} + 6 \times 5^{2n+1} + 6 \times 3^{n+2}\cdot 2^{n-1} & = \cdots
\end{align} can you finish the proof from here?</p>
Answer 30: <p>$2^{n-1} = \frac12 2^n \equiv 10 \cdot 2^n \pmod {19}$. Hence:</p>
<p>$5^{2n+1} +3^{n+2} \cdot 2^{n-1} \equiv 5 \cdot 6^n + 90 \cdot 6^n \pmod {19} \equiv 95 \cdot 6^n \pmod {19} \equiv 0 \pmod {19}$</p>
Answer 31: <p>For $n=0$, the formula says that $\left.19\middle|\frac{19}2\right.$, which is false. So consider $n\ge1$:
$$
\begin{align}
5^{2n+1}+3^{n+2}2^{n-1}
&=125\cdot25^{n-1}+27\cdot6^{n-1}\\
&\equiv11\cdot6^{n-1}+8\cdot6^{n-1}&\pmod{19}\\
&=19\cdot6^{n-1}\\
&\equiv0&\pmod{19}
\end{align}
$$
Since
$$
\begin{align}
125&\equiv11&\pmod{19}\\
25&\equiv6&\pmod{19}\\
27&\equiv8&\pmod{19}
\end{align}
$$</p>
Answer 32: <p>$5^{2n+1} = 5 \times 25^n \equiv 5 \times 6^n$ (modulo 19).
Hence we have $5^{2n+1} + 3^{n+2}\times2^{n-1} \equiv 6^{n-1} \times (30 + 27) = 6^{n-1}\times3\times 19$ (modulo 19)</p>
|
When is the matrix $\text{diag}(\mathbf{x}) + \mathbf{1}$ invertible? | 2,099,210 | 5 | https://math.stackexchange.com/questions/2099210/when-is-the-matrix-textdiag-mathbfx-mathbf1-invertible | <p>Given a vector $\mathbf{x} \in \mathbb{R}^N$, let's define:</p>
<p>$$\text{diag}(\mathbf{x}) = \begin{pmatrix}
x_1 & 0 & \ldots & 0 \\
0 & x_2 & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & x_N
\end{pmatrix}.$$</p>
<p>Moreover, let</p>
<p>$$\mathbf{1}= \begin{pmatrix}
1 & 1 & \ldots & 1 \\
1 & 1 & \ldots & 1 \\
\vdots & \vdots & \ddots & \vdots \\
1 & 1 & \ldots & 1
\end{pmatrix}.$$</p>
<p>Here is my question:</p>
<blockquote>
<p>When is the matrix $\mathbf{M} = \text{diag}(\mathbf{x}) + \mathbf{1}$ invertible?</p>
</blockquote>
<p>I was able to find some results when $x_1 = x_2 = \ldots = x_N = x$. Indeed, the matrix $M$ is singular when:</p>
<ol>
<li>$x=0$. This is trivial since $\mathbf{M} = \mathbf{1}$...</li>
<li>$x=-N$. In this case, if you sum up all the rows (or columns) of the matrix $M$, you get the zero vector.</li>
</ol>
<p>What can I say in the general case when $\mathbf{x}$ is a generic vector?</p>
| [
{
"answer_id": 2099239,
"score": 4,
"text": "<p>We can easily compute the determinant of the sum $\\operatorname{diag}(\\mathbf{x}) + \\mathbf{1}$ and check invertibility that way,</p>\n\n<p>If all the diagonal entries $x_i$ are nonzero, we can apply the <a href=\"https://en.wikipedia.org/wiki/Matrix_determinant_lemma\" rel=\"nofollow noreferrer\">matrix determinant lemma</a> for a rank one update to an invertible matrix:</p>\n\n<p>$$ \\det(A+uv^T) = (1 + v^T A^{-1} u) \\det(A) $$</p>\n\n<p>When $A$ is the matrix $\\operatorname{diag}(\\mathbf{x})$ and $u,v$ are vectors of all ones, this says the matrix sum is invertible unless the sum of the reciprocals $x_i^{-1}$ is $-1$.</p>\n\n<p>If one of the diagonal entries is zero, say $x_1$ without loss of generality, then elementary row operations quickly show that the determinant of $\\operatorname{diag}(\\mathbf{x}) + \\mathbf{1}$ is $\\prod_{k=2}^n x_k$.</p>\n"
},
{
"answer_id": 2099265,
"score": 2,
"text": "<p>i think your matrix is nonsingular iff \n$$1 +\\frac 1{x_1}+\\frac 1{x_2}+ \\ldots + \\frac 1{x_n} \\ne 0$$ </p>\n\n<p>i will look at the case $n = 4$ will consider the matrix $A = D + jj^\\top$ where $D$ is the diagonal matrix with entries $d_1, d_2, d_3$ and $d_4, j = (1,1,1,1)^\\top.$ </p>\n\n<p>suppose $\\lambda, x$ is an eigenvalue-eigenvector pair. then we have \n$$d_1x_1 + x_1 + x_2+ x_3 + x_4 = \\lambda x_1, \\ldots, x_1+x_2+x_3+x_4 + d_4x_4=\\lambda x_4 $$ solving for $x_1$ we find that \n$$x_1= \\frac 1{(\\lambda- d_1)}(x_1+x_2+x_3+x_4), \\ldots \nx_4= \\frac 1{(\\lambda- d_4)}\\left(x_1+x_2+x_3+x_4\\right)\\tag 1 $$ adding the four equations in (1), you find that the characteristic equation of $A$ is \n$$1 = \\frac 1{(\\lambda-d_1)}+\\frac 1{(\\lambda-d_2)}+\\frac 1{(\\lambda-d_3)}+\\frac 1{(\\lambda-d_4)} $$ </p>\n\n<p>therefore the matrix $D + jj^\\top$ is singular iff $$ \\frac 1d_1 + \\frac 1d_2 + \\frac 1d_3 + \\frac 1d_4 + 1 \\ne 0. $$</p>\n"
}
] | Question: When is the matrix $\text{diag}(\mathbf{x}) + \mathbf{1}$ invertible?
Answer 1: <p>We can easily compute the determinant of the sum $\operatorname{diag}(\mathbf{x}) + \mathbf{1}$ and check invertibility that way.</p>
<p>If all the diagonal entries $x_i$ are nonzero, we can apply the <a href="https://en.wikipedia.org/wiki/Matrix_determinant_lemma" rel="nofollow noreferrer">matrix determinant lemma</a> for a rank one update to an invertible matrix:</p>
<p>$$ \det(A+uv^T) = (1 + v^T A^{-1} u) \det(A) $$</p>
<p>When $A$ is the matrix $\operatorname{diag}(\mathbf{x})$ and $u,v$ are vectors of all ones, this says the matrix sum is invertible unless the sum of the reciprocals $x_i^{-1}$ is $-1$.</p>
<p>If one of the diagonal entries is zero, say $x_1$ without loss of generality, then elementary row operations quickly show that the determinant of $\operatorname{diag}(\mathbf{x}) + \mathbf{1}$ is $\prod_{k=2}^n x_k$.</p>
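<p>A small numerical sanity check of the lemma as applied here (a sketch; the test vector below is an arbitrary choice, not from the post):</p>

<pre><code>import numpy as np

x = np.array([2.0, 3.0, 4.0, 5.0])
M = np.diag(x) + np.ones((4, 4))
# Matrix determinant lemma: det(diag(x) + 1) = (1 + sum(1/x_i)) * prod(x_i)
print(np.linalg.det(M))                  # ~274.0
print((1 + np.sum(1 / x)) * np.prod(x))  # same value up to rounding
</code></pre>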
Answer 2: <p>I think your matrix is nonsingular iff
$$1 +\frac 1{x_1}+\frac 1{x_2}+ \ldots + \frac 1{x_n} \ne 0$$ </p>
<p>I will look at the case $n = 4$ and consider the matrix $A = D + jj^\top$ where $D$ is the diagonal matrix with entries $d_1, d_2, d_3$ and $d_4$, and $j = (1,1,1,1)^\top.$ </p>
<p>Suppose $\lambda, x$ is an eigenvalue-eigenvector pair. Then we have
$$d_1x_1 + x_1 + x_2+ x_3 + x_4 = \lambda x_1, \ldots, x_1+x_2+x_3+x_4 + d_4x_4=\lambda x_4 $$ Solving for $x_1$ we find that
$$x_1= \frac 1{(\lambda- d_1)}(x_1+x_2+x_3+x_4), \ldots
x_4= \frac 1{(\lambda- d_4)}\left(x_1+x_2+x_3+x_4\right)\tag 1 $$ Adding the four equations in (1), you find that the characteristic equation of $A$ is
$$1 = \frac 1{(\lambda-d_1)}+\frac 1{(\lambda-d_2)}+\frac 1{(\lambda-d_3)}+\frac 1{(\lambda-d_4)} $$ </p>
<p>Therefore the matrix $D + jj^\top$ is nonsingular iff $$ \frac 1{d_1} + \frac 1{d_2} + \frac 1{d_3} + \frac 1{d_4} + 1 \ne 0. $$</p>
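<p>This criterion matches the observation in the question that $x_1=\dots=x_N=-N$ gives a singular matrix: there $\sum_i 1/x_i=-1$, so $1+\sum_i 1/x_i=0$. A quick check (sketch):</p>

<pre><code>import numpy as np

N = 4
x = np.full(N, -float(N))        # each x_i = -N, so 1 + sum(1/x_i) = 0
M = np.diag(x) + np.ones((N, N))
print(1 + np.sum(1 / x))         # 0.0 -> the criterion predicts singularity
print(np.linalg.matrix_rank(M))  # N - 1 = 3, so M is indeed singular
</code></pre>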
sum of series $(2^2-1)(6^2-1)+(4^2-1)(8^2-1)+\cdots+(100^2-1)(104^2-1)$ | 2,099,793 | 5 | https://math.stackexchange.com/questions/2099793/sum-of-series-22-162-142-182-1-cdots-cdots-cdots-1002-1104 | <p>The sum of series $(2^2-1)(6^2-1)+(4^2-1)(8^2-1)+\cdots+(100^2-1)(104^2-1)$</p>
<p><strong>Attempt</strong> Assume $\displaystyle S = \sum^{50}_{r=1}((2r)^2-1)((2r+4)^2-1) = \sum^{50}_{r=1}(4r^2-1)(4r^2+16r+15)$</p>
<p>$\displaystyle S = \sum^{50}_{r=1}(16r^4+64r^3+56r^2-16r-15)$</p>
<p>Could someone help me solve it? Thanks.</p>
| [
{
"answer_id": 2099799,
"score": 10,
"text": "<p>You can find the sum using the idea of telescoping sum.</p>\n\n<p>Since\n$$((2r)^2-1)((2r+4)^2-1)=(2r-1)(2r+1)(2r+3)(2r+5)$$\nwe get\n$$\\small\\begin{align}&\\sum_{r=1}^{50}(2r-1)(2r+1)(2r+3)(2r+5)\\\\\\\\&=\\frac{1}{10}\\sum_{r=1}^{50}\\color{red}{10}(2r-1)(2r+1)(2r+3)(2r+5)\\\\\\\\&=\\frac{1}{10}\\sum_{r=1}^{50}\\color{red}{((2r+7)-(2r-3))}(2r-1)(2r+1)(2r+3)(2r+5)\\\\\\\\&=\\frac{1}{10}\\sum_{r=1}^{50}((2r+7)(2r+5)(2r+3)(2r+1)(2r-1)-(2r+5)(2r+3)(2r+1)(2r-1)(2r-3))\\\\\\\\&=\\frac{1}{10}(107\\cdot 105\\cdot 103\\cdot 101\\cdot 99-7\\cdot 5\\cdot 3\\cdot 1\\cdot (-1))\\end{align}$$</p>\n"
},
{
"answer_id": 2099859,
"score": 1,
"text": "<p>Being lazy, we could use the classical sums of powers of integer numbers as given <a href=\"http://mathworld.wolfram.com/PowerSum.html\" rel=\"nofollow noreferrer\">here</a>. This would lead to </p>\n\n<p>$$ S_n = \\sum^{n}_{r=1}(16r^4+64r^3+56r^2-16r-15)=\\frac{1}{5} \\left(16 n^5+120 n^4+280 n^3+180 n^2-71 n\\right)$$</p>\n\n<p>For sure, much more elegant is mathlove's solution.</p>\n"
}
] | Question: sum of series $(2^2-1)(6^2-1)+(4^2-1)(8^2-1)+\cdots \cdots +\cdots (100^2-1)(104^2-1)$
Answer 1: <p>You can find the sum using the idea of telescoping sum.</p>
<p>Since
$$((2r)^2-1)((2r+4)^2-1)=(2r-1)(2r+1)(2r+3)(2r+5)$$
we get
$$\small\begin{align}&\sum_{r=1}^{50}(2r-1)(2r+1)(2r+3)(2r+5)\\\\&=\frac{1}{10}\sum_{r=1}^{50}\color{red}{10}(2r-1)(2r+1)(2r+3)(2r+5)\\\\&=\frac{1}{10}\sum_{r=1}^{50}\color{red}{((2r+7)-(2r-3))}(2r-1)(2r+1)(2r+3)(2r+5)\\\\&=\frac{1}{10}\sum_{r=1}^{50}((2r+7)(2r+5)(2r+3)(2r+1)(2r-1)-(2r+5)(2r+3)(2r+1)(2r-1)(2r-3))\\\\&=\frac{1}{10}(107\cdot 105\cdot 103\cdot 101\cdot 99-7\cdot 5\cdot 3\cdot 1\cdot (-1))\end{align}$$</p>
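<p>A brute-force evaluation confirms the telescoped value (a verification sketch added here):</p>

<pre><code># Direct sum vs. the telescoped closed form from the last line above
S = sum((2*r - 1)*(2*r + 1)*(2*r + 3)*(2*r + 5) for r in range(1, 51))
closed = (107*105*103*101*99 - 7*5*3*1*(-1)) // 10
print(S == closed, S)  # True 1157089290
</code></pre>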
Answer 2: <p>Being lazy, we could use the classical sums of powers of integer numbers as given <a href="http://mathworld.wolfram.com/PowerSum.html" rel="nofollow noreferrer">here</a>. This would lead to </p>
<p>$$ S_n = \sum^{n}_{r=1}(16r^4+64r^3+56r^2-16r-15)=\frac{1}{5} \left(16 n^5+120 n^4+280 n^3+180 n^2-71 n\right)$$</p>
<p>For sure, much more elegant is mathlove's solution.</p>
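<p>Both closed forms can be cross-checked against the direct sum (a sketch; the helper names are introduced here):</p>

<pre><code># Power-sum polynomial vs. brute force; at n = 50 both give the series value
def S_direct(n):
    return sum(16*r**4 + 64*r**3 + 56*r**2 - 16*r - 15 for r in range(1, n + 1))

def S_closed(n):
    return (16*n**5 + 120*n**4 + 280*n**3 + 180*n**2 - 71*n) // 5

assert all(S_direct(n) == S_closed(n) for n in (1, 2, 10, 50))
print(S_closed(50))  # 1157089290, matching the telescoping answer
</code></pre>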
|
Feller - Question on two throws of three dice | 2,099,958 | 5 | https://math.stackexchange.com/questions/2099958/feller-question-on-two-throws-of-three-dice | <p>I am stuck at the following question.</p>
<blockquote>
<p>What is the probability that two throws with three dice each will show the same configuration if (a) the dice are distinguishable (b) they are not.</p>
</blockquote>
<p><em>Solution.</em></p>
<p>(a) $P(\text{Two throws of three dice s.t. each show the same configuration})=6^{3}/6^{6}=1/216$.</p>
<p>(b) First, from the second set of throws, the $3$ dice with the same face values as the first ones can be chosen in $3!=6$ ways.</p>
<p>Placing $r=3$ dice in $n=6$ cells, where order does not matter and repetition is allowed, can be done in ${{3+6-1}\choose{3}}={8\choose 3}=56$ ways.</p>
<p>${\displaystyle P(\text{Two throws of three indistinguishable dice show the same configuration})=\frac{56\times6}{6^{6}}}$</p>
<p>However, my answer to the second part of the problem is incorrect. Could someone help me think correctly about the problem.</p>
| [
{
"answer_id": 2100003,
"score": 4,
"text": "<p>(b)</p>\n\n<p>Three distinct faces :</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{(6\\cdot5\\cdot4)\\times(3\\cdot2\\cdot1)}{6^6}$$</p>\n\n<p>One repetition:</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{{3\\choose2}(6\\cdot5)\\times{3\\choose2}}{6^6}$$</p>\n\n<p>Two repetitions:</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{6}{6^6}$$</p>\n\n<p>Thus, the required probability is,</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{996}{6^6}$$</p>\n"
},
{
"answer_id": 2101049,
"score": 3,
"text": "<p>Simulation: Two successive throws of three fair dice with faces 1 through 6. Probability of same three faces showing: (a) if dice are indistinguishable,\n(b) distinguishable.</p>\n\n<p>In the distinguishable case, it may help to imagine the dice are colored\nred, green, and blue, respectively. The probability is pretty clearly\n$1/6^3 = 0.00462963.$ In simulating for the indistinguishable dice, we sort the \noutcomes before comparison because color does not matter.</p>\n\n<pre><code>m = 10^6; same.i = same.d = numeric(m)\nfor (i in 1:m) {\n roll.1 = sample(1:6, 3, repl=T)\n roll.2 = sample(1:6, 3, repl=T)\n same.d[i] = sum(roll.1==roll.2)\n same.i[i] = sum(sort(roll.1)==sort(roll.2)) }\nmean(same.d==3); mean(same.i==3)\n## 0.004632\n## 0.021579\n</code></pre>\n\n<p>With a million iterations, probabilities should be accurate to \nthree places (a little more for very small probabilities), so\nthe first simulated answer is consistent with $1/63.$ And, the\nsecond answer is consistent with with @Quasar's $996/6^6 = 0.02134774.$ (+1)</p>\n\n<p>For a little more detail, my <code>same.d</code> and <code>same.i</code> are numbers\nof faces that agree for distinguishable and indistinguishable dice, respectively.\nhere are the histograms of their respective distributions.\n(The probabilities mainly at issue here are the ones for 3 matches.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/QffyN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/QffyN.png\" alt=\"enter image description here\"></a></p>\n"
},
{
"answer_id": 2101049,
"score": 3,
"text": "<p>Simulation: Two successive throws of three fair dice with faces 1 through 6. Probability of same three faces showing: (a) if dice are indistinguishable,\n(b) distinguishable.</p>\n\n<p>In the distinguishable case, it may help to imagine the dice are colored\nred, green, and blue, respectively. The probability is pretty clearly\n$1/6^3 = 0.00462963.$ In simulating for the indistinguishable dice, we sort the \noutcomes before comparison because color does not matter.</p>\n\n<pre><code>m = 10^6; same.i = same.d = numeric(m)\nfor (i in 1:m) {\n roll.1 = sample(1:6, 3, repl=T)\n roll.2 = sample(1:6, 3, repl=T)\n same.d[i] = sum(roll.1==roll.2)\n same.i[i] = sum(sort(roll.1)==sort(roll.2)) }\nmean(same.d==3); mean(same.i==3)\n## 0.004632\n## 0.021579\n</code></pre>\n\n<p>With a million iterations, probabilities should be accurate to \nthree places (a little more for very small probabilities), so\nthe first simulated answer is consistent with $1/63.$ And, the\nsecond answer is consistent with with @Quasar's $996/6^6 = 0.02134774.$ (+1)</p>\n\n<p>For a little more detail, my <code>same.d</code> and <code>same.i</code> are numbers\nof faces that agree for distinguishable and indistinguishable dice, respectively.\nhere are the histograms of their respective distributions.\n(The probabilities mainly at issue here are the ones for 3 matches.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/QffyN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/QffyN.png\" alt=\"enter image description here\"></a></p>\n"
},
{
"answer_id": 2100003,
"score": 4,
"text": "<p>(b)</p>\n\n<p>Three distinct faces :</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{(6\\cdot5\\cdot4)\\times(3\\cdot2\\cdot1)}{6^6}$$</p>\n\n<p>One repetition:</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{{3\\choose2}(6\\cdot5)\\times{3\\choose2}}{6^6}$$</p>\n\n<p>Two repetitions:</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{6}{6^6}$$</p>\n\n<p>Thus, the required probability is,</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{996}{6^6}$$</p>\n"
},
{
"answer_id": 2101049,
"score": 3,
"text": "<p>Simulation: Two successive throws of three fair dice with faces 1 through 6. Probability of same three faces showing: (a) if dice are indistinguishable,\n(b) distinguishable.</p>\n\n<p>In the distinguishable case, it may help to imagine the dice are colored\nred, green, and blue, respectively. The probability is pretty clearly\n$1/6^3 = 0.00462963.$ In simulating for the indistinguishable dice, we sort the \noutcomes before comparison because color does not matter.</p>\n\n<pre><code>m = 10^6; same.i = same.d = numeric(m)\nfor (i in 1:m) {\n roll.1 = sample(1:6, 3, repl=T)\n roll.2 = sample(1:6, 3, repl=T)\n same.d[i] = sum(roll.1==roll.2)\n same.i[i] = sum(sort(roll.1)==sort(roll.2)) }\nmean(same.d==3); mean(same.i==3)\n## 0.004632\n## 0.021579\n</code></pre>\n\n<p>With a million iterations, probabilities should be accurate to \nthree places (a little more for very small probabilities), so\nthe first simulated answer is consistent with $1/63.$ And, the\nsecond answer is consistent with with @Quasar's $996/6^6 = 0.02134774.$ (+1)</p>\n\n<p>For a little more detail, my <code>same.d</code> and <code>same.i</code> are numbers\nof faces that agree for distinguishable and indistinguishable dice, respectively.\nhere are the histograms of their respective distributions.\n(The probabilities mainly at issue here are the ones for 3 matches.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/QffyN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/QffyN.png\" alt=\"enter image description here\"></a></p>\n"
},
{
"answer_id": 2100003,
"score": 4,
"text": "<p>(b)</p>\n\n<p>Three distinct faces :</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{(6\\cdot5\\cdot4)\\times(3\\cdot2\\cdot1)}{6^6}$$</p>\n\n<p>One repetition:</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{{3\\choose2}(6\\cdot5)\\times{3\\choose2}}{6^6}$$</p>\n\n<p>Two repetitions:</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{6}{6^6}$$</p>\n\n<p>Thus, the required probability is,</p>\n\n<p>$$P(\\text{getting the same configuration}) = \\frac{996}{6^6}$$</p>\n"
},
{
"answer_id": 2101049,
"score": 3,
"text": "<p>Simulation: Two successive throws of three fair dice with faces 1 through 6. Probability of same three faces showing: (a) if dice are indistinguishable,\n(b) distinguishable.</p>\n\n<p>In the distinguishable case, it may help to imagine the dice are colored\nred, green, and blue, respectively. The probability is pretty clearly\n$1/6^3 = 0.00462963.$ In simulating for the indistinguishable dice, we sort the \noutcomes before comparison because color does not matter.</p>\n\n<pre><code>m = 10^6; same.i = same.d = numeric(m)\nfor (i in 1:m) {\n roll.1 = sample(1:6, 3, repl=T)\n roll.2 = sample(1:6, 3, repl=T)\n same.d[i] = sum(roll.1==roll.2)\n same.i[i] = sum(sort(roll.1)==sort(roll.2)) }\nmean(same.d==3); mean(same.i==3)\n## 0.004632\n## 0.021579\n</code></pre>\n\n<p>With a million iterations, probabilities should be accurate to \nthree places (a little more for very small probabilities), so\nthe first simulated answer is consistent with $1/63.$ And, the\nsecond answer is consistent with with @Quasar's $996/6^6 = 0.02134774.$ (+1)</p>\n\n<p>For a little more detail, my <code>same.d</code> and <code>same.i</code> are numbers\nof faces that agree for distinguishable and indistinguishable dice, respectively.\nhere are the histograms of their respective distributions.\n(The probabilities mainly at issue here are the ones for 3 matches.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/QffyN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/QffyN.png\" alt=\"enter image description here\"></a></p>\n"
}
] | Question: Feller - Question on two throws of three dice
Answer 1: <p>(b)</p>
<p>Three distinct faces:</p>
<p>$$P(\text{getting the same configuration}) = \frac{(6\cdot5\cdot4)\times(3\cdot2\cdot1)}{6^6}$$</p>
<p>One repetition:</p>
<p>$$P(\text{getting the same configuration}) = \frac{{3\choose2}(6\cdot5)\times{3\choose2}}{6^6}$$</p>
<p>Two repetitions:</p>
<p>$$P(\text{getting the same configuration}) = \frac{6}{6^6}$$</p>
<p>Adding the three cases, $720 + 270 + 6 = 996$, so the required probability is</p>
<p>$$P(\text{getting the same configuration}) = \frac{996}{6^6}$$</p>
Answer 2: <p>Simulation: Two successive throws of three fair dice with faces 1 through 6. Probability of same three faces showing: (a) if dice are indistinguishable,
(b) distinguishable.</p>
<p>In the distinguishable case, it may help to imagine the dice are colored
red, green, and blue, respectively. The probability is pretty clearly
$1/6^3 = 0.00462963.$ In simulating for the indistinguishable dice, we sort the
outcomes before comparison because color does not matter.</p>
<pre><code>m = 10^6; same.i = same.d = numeric(m)
for (i in 1:m) {
roll.1 = sample(1:6, 3, repl=T)
roll.2 = sample(1:6, 3, repl=T)
same.d[i] = sum(roll.1==roll.2)
same.i[i] = sum(sort(roll.1)==sort(roll.2)) }
mean(same.d==3); mean(same.i==3)
## 0.004632
## 0.021579
</code></pre>
<p>With a million iterations, probabilities should be accurate to
three places (a little more for very small probabilities), so
the first simulated answer is consistent with $1/6^3.$ And the
second answer is consistent with @Quasar's $996/6^6 = 0.02134774.$ (+1)</p>
<p>For a little more detail, my <code>same.d</code> and <code>same.i</code> are numbers
of faces that agree for distinguishable and indistinguishable dice, respectively.
Here are the histograms of their respective distributions.
(The probabilities mainly at issue here are the ones for 3 matches.)</p>
<p><a href="https://i.stack.imgur.com/QffyN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QffyN.png" alt="enter image description here"></a></p>
|
Constructive proof of Barber Paradox | 2,100,007 | 5 | https://math.stackexchange.com/questions/2100007/constructive-proof-of-barber-paradox | <p>Q1. Can the Barber Paradox be proven false in constructive logic?</p>
<p>I am following the <a href="https://leanprover.github.io/tutorial/" rel="noreferrer">Lean tutorial</a> by Professor Jeremy Avigad et al. One of the exercises in section 4 asks to prove the Barber Paradox false. I can prove it using classical logic, but I can't do it using constructive logic.</p>
<p>Specifically, my proof needs classical logic to prove the lemma</p>
<pre><code>lemma contradictionEquivalence (p : Prop) : ¬(p <-> ¬p) := sorry
</code></pre>
<p>The proof of the lemma is straightforward. It uses equivalence elimination and excluded middle followed by two short lambda applications.</p>
<pre><code>variable p : Prop
open classical
lemma contradictionEquivalence : ¬(p <-> ¬p) :=
assume LHS : (p <-> ¬p),
have Hpnp : (p -> ¬p), from (iff.elim_left LHS),
have Hnpp : (¬p -> p), from (iff.elim_right LHS),
have ponp : p ∨ ¬p, from (em p),
or.elim ponp
(take Hp : p, Hpnp Hp Hp)
(take Hnp : ¬p, Hpnp (Hnpp Hnp) (Hnpp Hnp))
</code></pre>
<p>With the help of this lemma Barber Paradox can be proven false in a single lambda application.</p>
<pre><code>variables (men : Type) (barber : men) (shaves : men → men → Prop)
variable (H : ∀ x : men, shaves barber x ↔ ¬shaves x x)
theorem barberParadox : false :=
contradictionEquivalence
(shaves barber barber)
(H barber)
</code></pre>
<p>If I could prove the lemma <code>contradictionEquivalence</code> in constructive logic, I could answer Q1 in the affirmative.</p>
<p>I suspect that the lemma can be proven in constructive logic, because giving its constructive proof is an exercise in section 3 of the tutorial, but nothing has worked for me so far, so any help would be greatly appreciated.</p>
| [
{
"answer_id": 2100031,
"score": 4,
"text": "<p>Assume $p\\to\\neg p$ and $\\neg p \\to p$.</p>\n\n<ul>\n<li>Asume $p$. Then, since $p\\to\\neg p$, also $\\neg p$. But $p$ and $\\neg p$ are a contradiction.</li>\n<li>Thus, we have proved $\\neg p$.</li>\n<li>Since $\\neg p\\to p$, also $p$.</li>\n<li>But $\\neg p$ and $p$ are a contradiction.</li>\n</ul>\n\n<p>The assumption $(p\\to \\neg p)\\land(\\neg p\\to p)$ led to a contradiction, so \n$\\neg((p\\to \\neg p)\\land(\\neg p\\to p))$.</p>\n"
},
{
"answer_id": 2100525,
"score": 1,
"text": "<p>The answer of Henning Makholm is almost perfect, with one exception: according to the tutorial, proof by contradiction <code>by_contradiction</code> can only be used in the classical setting. I have worked around it by using the following lambda application <code>Hpnp Hp Hp</code>. This is a full constructive proof of the lemma:</p>\n\n<pre><code>variable p : Prop\n\nlemma contradictionEquivalence : ¬(p <-> ¬p) := \n assume LHS : (p <-> ¬p),\n have Hpnp : (p -> ¬p), from (iff.elim_left LHS),\n have Hnpp : (¬p -> p), from (iff.elim_right LHS),\n have Hnp : ¬p, from (assume Hp : p, Hpnp Hp Hp),\n have Hp : p, from Hnpp Hnp,\n show false, from Hnp Hp\n</code></pre>\n"
}
] | Question: Constructive proof of Barber Paradox
Answer 1: <p>Assume $p\to\neg p$ and $\neg p \to p$.</p>
<ul>
<li>Assume $p$. Then, since $p\to\neg p$, also $\neg p$. But $p$ and $\neg p$ are a contradiction.</li>
<li>Thus, we have proved $\neg p$.</li>
<li>Since $\neg p\to p$, also $p$.</li>
<li>But $\neg p$ and $p$ are a contradiction.</li>
</ul>
<p>The assumption $(p\to \neg p)\land(\neg p\to p)$ led to a contradiction, so
$\neg((p\to \neg p)\land(\neg p\to p))$.</p>
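<p><em>Editor's note:</em> the derivation above transcribes directly into a proof term in the tutorial's Lean syntax. The following compact sketch is the editor's own rendering of these steps (the asker's fuller version appears in the next answer); the lemma name is the one used in the question:</p>
<pre><code>lemma contradictionEquivalence (p : Prop) : ¬(p ↔ ¬p) :=
assume H : p ↔ ¬p,
-- "Assume p ... thus, we have proved ¬p":
have Hnp : ¬p, from assume Hp : p, iff.elim_left H Hp Hp,
-- "Since ¬p → p, also p; but ¬p and p are a contradiction":
show false, from Hnp (iff.elim_right H Hnp)
</code></pre>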
Answer 2: <p>The answer of Henning Makholm is almost perfect, with one exception: according to the tutorial, proof by contradiction <code>by_contradiction</code> can only be used in the classical setting. I have worked around it by using the following lambda application <code>Hpnp Hp Hp</code>. This is a full constructive proof of the lemma:</p>
<pre><code>variable p : Prop
lemma contradictionEquivalence : ¬(p <-> ¬p) :=
assume LHS : (p <-> ¬p),
have Hpnp : (p -> ¬p), from (iff.elim_left LHS),
have Hnpp : (¬p -> p), from (iff.elim_right LHS),
have Hnp : ¬p, from (assume Hp : p, Hpnp Hp Hp),
have Hp : p, from Hnpp Hnp,
show false, from Hnp Hp
</code></pre>
|
How many three digit numbers have all three digits that are different? | 2,100,295 | 5 | https://math.stackexchange.com/questions/2100295/how-many-three-digit-numbers-have-all-three-digits-that-are-different | <p>How do I solve this?</p>
<p>Please explain in detail. I am told the answer is $9\times9\times8$, but I do not understand the reasoning that leads to it.</p>
| [
{
"answer_id": 2100336,
"score": 2,
"text": "<p>Since a three digit number cannot have a zero at hundredth place , therefore the first digit from left can be chosen from any one of $1$-$9$ digits ,i.e., hundredth place can be filled in $9$ ways . The tenth place can have a zero, but not the digit at the hundredth place,i.e., there are $9$ ways the tenth place (i.e. the middle digit) can be filled with . Similarly , the units place can have a zero as a digit but not the digits at hundredth and tenth place and hence there are $8$ ways the units place can be filled with. </p>\n\n<p>Now, by the principle of counting, total no. of ways are $9\\times 9\\times 8$ .</p>\n"
},
{
"answer_id": 2100335,
"score": 1,
"text": "<p>Imagine a three digit number $\\overline{abc}$. For it to have three digits, then $a$ can't be zero, so you have $9$ possibilities for $a$ ($1$ - $9$). For $b$ you have $9$ possibilities as well, since you can now use $0$ but can't use the number in $a$. Finally, for $c$ you can't use the number in $a$ and $b$, so you are left with $8$ possibilities. The number of numbers that match all this is $9\\times 9\\times 8$. Why are we multiplying? Because for every a you have $9$ possible values of $b$ and 8 possible values of $c$, etc.</p>\n"
}
] | Question: How many three digit numbers have all three digits that are different?
Answer 1: <p>Since a three-digit number cannot have a zero in the hundreds place, the first digit from the left can be any one of the digits $1$-$9$, i.e., the hundreds place can be filled in $9$ ways. The tens place may hold a zero, but not the digit already used in the hundreds place, so there are $9$ ways to fill the tens place (the middle digit). Similarly, the units place may hold a zero but not the digits in the hundreds and tens places, and hence there are $8$ ways to fill the units place.</p>
<p>Now, by the multiplication principle of counting, the total number of such numbers is $9\times 9\times 8 = 648$.</p>
Answer 2: <p>Imagine a three-digit number $\overline{abc}$. For it to have three digits, $a$ can't be zero, so you have $9$ possibilities for $a$ ($1$-$9$). For $b$ you have $9$ possibilities as well, since you can now use $0$ but can't reuse the digit in $a$. Finally, for $c$ you can't use the digits in $a$ and $b$, so you are left with $8$ possibilities. The number of numbers that match all this is $9\times 9\times 8$. Why are we multiplying? Because for every choice of $a$ you have $9$ possible values of $b$, and for each of those, $8$ possible values of $c$.</p>
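<p><em>Editor's note (not part of the original answers):</em> the count is small enough to verify by brute force. A quick Python sketch:</p>
<pre><code># Count three-digit numbers whose digits are pairwise distinct.
count = sum(1 for n in range(100, 1000) if len(set(str(n))) == 3)
print(count)  # 648, which equals 9 * 9 * 8
</code></pre>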
|
What will be the 2014th digit? | 2,100,441 | 5 | https://math.stackexchange.com/questions/2100441/what-will-be-the-2014th-digit | <blockquote>
<p>Let <span class="math-container">$a$</span> be the integer <span class="math-container">$\underbrace{33\ldots 33}_{2013}$</span> and let <span class="math-container">$b$</span> be the integer <span class="math-container">$\underbrace{66\ldots66}_{2013}$</span></p>
<p>What is the 2014th digit (from the right) that appears in the product <span class="math-container">$ab$</span>?</p>
</blockquote>
<p>I tried using modular arithmetic, but is that the correct way to approach this problem?</p>
| [
{
"answer_id": 2100515,
"score": 9,
"text": "<p>\\begin{align}\nab&=18\\cdot[\\frac{10^{2013}-1}{9}]^2=\\frac29[10^{4026}-2\\cdot10^{2013}+1]\\\\\n&=\\frac29[(10^{4026}-1)-2(10^{2013}-1)]\\\\\n&=2\\cdot\\frac{10^{4026}-1}9-4\\cdot\\frac{10^{2013}-1}9\\\\\n&=\\underbrace{22\\ldots 2}_{4026}-\\underbrace{44\\ldots 4}_{2013}\\\\\n&=\\underbrace{22\\ldots 2}_{2013}\\underbrace{00\\ldots 0}_{2013}-\\underbrace{22\\ldots 2}_{2013}\\\\\n&=\\underbrace{22\\ldots 2}_{2012}1\\underbrace{77\\ldots 7}_{2012}8\n\\\\\n\\end{align}</p>\n"
},
{
"answer_id": 2100476,
"score": 5,
"text": "<p>Your number $ab$ is $18\\cdot \\underbrace{11\\ldots 11}_{2013}{}^2$.</p>\n\n<p>Let's imagine multiplying out the square of the repdigit with the usual pencil-and-paper algorithm. Before we start handling carries between the columns, the 2014th column will sum to $2012$, the one just to the right of it sums to $2013$, the one to the right of <em>that</em> sums to $2012$, then $2011$ and so forth (and what happens to the left is immaterial):</p>\n\n<p>(To see this pattern, consider why $111{,}111{,}111\\cdot 111{,}111{,}111= 12{,}345{,}678{,}987{,}654{,}321$, right before the columns start carrying into each other. The analogue of \"2014th digit\" here the 10th digit, which is the leftmost of the two 8s).</p>\n\n<pre><code> 2 0 1 2\n+ 2 0 1 3\n+ 2 0 1 2\n+ 2 0 1 1\n+ 2 0 1 0\n+ .......\n ---------------------\n= ?\n</code></pre>\n\n<p>For simplicity, lets pre-multiply each of the rows here by $18$; then we will be adding</p>\n\n<pre><code> 3 6 2 1 6\n+ 3 6 2 3 4\n+ 3 6 2 1 6\n+ 3 6 1 9 8\n+ 3 6 1 8 0\n+ 3 6 1 6 2\n+ .........\n------------------------\n ?\n</code></pre>\n\n<p>In <em>this</em> addition there are only 5 nonzero digits in each column, so each carry is at most 4. The digits in the 2013th column sum to $4+1+1+6+3=15$, so depending on what the incoming carry is, the 2013th digit of the result is one of 5,6,7,8,9, and <em>the carry into the 2014th column is necessarily $1$</em>.</p>\n\n<p>Therefore, summing the 2014th column, including the carry, we get\n$$ (6+3+2+6+3)+1 = 20+1 = 21 $$\nso the final digit in the 2014th column is <code>1</code>.</p>\n"
}
] | Question: What will be the 2014th digit?
Answer 1: <p>\begin{align}
ab&=18\cdot[\frac{10^{2013}-1}{9}]^2=\frac29[10^{4026}-2\cdot10^{2013}+1]\\
&=\frac29[(10^{4026}-1)-2(10^{2013}-1)]\\
&=2\cdot\frac{10^{4026}-1}9-4\cdot\frac{10^{2013}-1}9\\
&=\underbrace{22\ldots 2}_{4026}-\underbrace{44\ldots 4}_{2013}\\
&=\underbrace{22\ldots 2}_{2013}\underbrace{00\ldots 0}_{2013}-\underbrace{22\ldots 2}_{2013}\\
&=\underbrace{22\ldots 2}_{2012}1\underbrace{77\ldots 7}_{2012}8
\\
\end{align}</p>
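<p>Because Python integers have arbitrary precision, the closed form above is easy to sanity-check directly; the following sketch (an illustration, not part of the original answer) verifies the digit pattern for small repdigit lengths and then reads off the 2014th digit for length $2013$:</p>
<pre><code># The product of n threes and n sixes should be (n-1) twos, a 1,
# (n-1) sevens, and an 8; check small n, then extract the digit.
for n in range(1, 8):
    a = int("3" * n)
    b = int("6" * n)
    assert str(a * b) == "2" * (n - 1) + "1" + "7" * (n - 1) + "8"

n = 2013
product = int("3" * n) * int("6" * n)
print(str(product)[-2014])  # '1' -- the 2014th digit from the right
</code></pre>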
Answer 2: <p>Your number $ab$ is $18\cdot \underbrace{11\ldots 11}_{2013}{}^2$.</p>
<p>Let's imagine multiplying out the square of the repdigit with the usual pencil-and-paper algorithm. Before we start handling carries between the columns, the 2014th column will sum to $2012$, the one just to the right of it sums to $2013$, the one to the right of <em>that</em> sums to $2012$, then $2011$ and so forth (and what happens to the left is immaterial):</p>
<p>(To see this pattern, consider why $111{,}111{,}111\cdot 111{,}111{,}111= 12{,}345{,}678{,}987{,}654{,}321$, right before the columns start carrying into each other. The analogue of the "2014th digit" here is the 10th digit, which is the leftmost of the two 8s.)</p>
<pre><code> 2 0 1 2
+ 2 0 1 3
+ 2 0 1 2
+ 2 0 1 1
+ 2 0 1 0
+ .......
---------------------
= ?
</code></pre>
<p>For simplicity, let's pre-multiply each of the rows here by $18$; then we will be adding</p>
<pre><code> 3 6 2 1 6
+ 3 6 2 3 4
+ 3 6 2 1 6
+ 3 6 1 9 8
+ 3 6 1 8 0
+ 3 6 1 6 2
+ .........
------------------------
?
</code></pre>
<p>In <em>this</em> addition each column receives digits from at most 5 rows, so every column sum is at most $45$ and each carry is at most 4. The digits in the 2013th column sum to $4+1+1+6+3=15$, so depending on what the incoming carry is, the 2013th digit of the result is one of 5,6,7,8,9, and <em>the carry into the 2014th column is necessarily $1$</em>.</p>
<p>Therefore, summing the 2014th column, including the carry, we get
$$ (6+3+2+6+3)+1 = 20+1 = 21 $$
so the final digit in the 2014th column is <code>1</code>.</p>
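<p>The pre-carry column sums used in this argument can be rebuilt explicitly; the sketch below (a Python illustration, not part of the original answer) forms the column sums of $18\cdot \underbrace{11\ldots 11}_{2013}{}^2$, propagates the carries, and recovers the same digit:</p>
<pre><code>n = 2013
# Column k (0-indexed from the right) of the repunit square receives
# min(k, n-1, 2n-2-k) + 1 ones before carrying; scale everything by 18.
cols = [18 * (min(k, n - 1, 2 * n - 2 - k) + 1) for k in range(2 * n - 1)]

# Propagate carries from the right to obtain the decimal digits.
digits, carry = [], 0
for c in cols:
    carry, d = divmod(c + carry, 10)
    digits.append(d)
while carry:
    carry, d = divmod(carry, 10)
    digits.append(d)

print(digits[2013])  # 1 -- the 2014th digit from the right
</code></pre>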
|
Proof that $\frac{a}{a+3b+3c}+\frac{b}{b+3a+3c}+\frac{c}{c+3a+3b} \ge \frac{3}{7}$ for all $a,b,c > 0$ | 2,100,641 | 5 | https://math.stackexchange.com/questions/2100641/proof-that-fracaa3b3c-fracbb3a3c-fraccc3a3b-ge-frac37 | <p>So I am trying to prove that $\frac{a}{a+3b+3c}+\frac{b}{b+3a+3c}+\frac{c}{c+3a+3b} \ge \frac{3}{7}$ for all $a,b,c > 0$. First I tried the Cauchy–Schwarz inequality but got nowhere.<br>
Now I am trying to find a convex function so I can use Jensen's inequality, but I can't come up with one that works. Does anyone have an idea?</p>
| [
{
"answer_id": 2101771,
"score": 3,
"text": "<p>Note that\n\\begin{align*}\n(a+b+c)^2 \\ge 3ab+3bc+3ac.\n\\end{align*}\nTherefore,\n\\begin{align*}\n&\\ \\frac{a}{a+3b+3c}+\\frac{b}{b+3a+3c}+\\frac{c}{c+3a+3b}\\\\\n=&\\ \\frac{a^2}{a^2+3ab+3ac}+\\frac{b^2}{b^2+3ab+3bc}+\\frac{c^2}{c^2+3ac+3bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\\\\n=&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + \\frac{4}{3}(a+b+c)^2}\\\\\n=&\\ \\frac{3}{7}.\n\\end{align*}</p>\n"
},
{
"answer_id": 2100652,
"score": 2,
"text": "<p>Cauchy-Schwarz actually works pretty well.</p>\n\n<p>We have with Cauchy-Schwarz if we put $x_1=\\sqrt{\\frac{a}{a+3b+3c}}$, $x_2=\\sqrt{\\frac{b}{b+3c+3a}}$ and $x_3=\\sqrt{\\frac{c}{c+3a+3b}}$ as well as $y_1=\\sqrt{a(a+3b+3c)}$, $y_2=\\sqrt{b(b+3c+3a)}$ and $y_3=\\sqrt{c(c+3a+3b)}$\n$$\n\\left(x_1^2+x_2^2+x_3^2\\right)\\left(y_1^2+y_2^2+y_3^2\\right)≥\\left(x_1y_1+x_2y_2+x_3y_3\\right)^2\\iff\\\\\n\\left(\\frac{a}{a+3b+3c}+\\frac{b}{b+3c+3a}+\\frac{c}{c+3a+3b}\\right)\\left(a(a+3b+3c)+b(b+3c+3a)+c(c+3a+3b)\\right)≥(a+b+c)^2\n$$\nso its enough to prove\n$$\n\\frac{(a+b+c)^2}{\\sum_{cyc}a(a+3b+3c)}≥\\frac37\\iff\\\\\n7(a^2+b^2+c^2+2ab+2bc+2ca)≥3(a^2+b^2+c^2+6ab+6bc+6ca)\\iff\\\\\na^2+b^2+c^2≥ab+bc+ca\n$$\nwhich is true.</p>\n"
},
{
"answer_id": 2100665,
"score": 2,
"text": "<p><strong>Hint</strong></p>\n\n<p>The inequality is homogeneous in $(a,b,c)$ so we can choose them such that</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>so we have to find the minimum for the function:</p>\n\n<p>$$f(a,b,c)=\\frac{a}{1-2a}+\\frac{b}{1-2b}+\\frac{c}{1-2c}$$</p>\n\n<p>with boundary</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>Now can use Lagrange Multiplier and find the minimum $3/7$</p>\n"
},
{
"answer_id": 2100682,
"score": 2,
"text": "<p>moving $$\\frac{3}{7}$$ to the left and clearing the denominators we get\n$$3\\,{a}^{3}-{a}^{2}b-{a}^{2}c-a{b}^{2}-3\\,abc-a{c}^{2}+3\\,{b}^{3}-{b}^{\n2}c-b{c}^{2}+3\\,{c}^{3}\n\\geq 0$$ and this is equivalent to $$a^3+b^3+c^3-3abc+(a-b)(a^2-b^2)+(b^2-c^2)(b-c)+(a^2-c^2)(a-c)\\geq 0$$\nwhich is clear\n(the first Terms are AM-GM)</p>\n"
},
{
"answer_id": 2101102,
"score": 2,
"text": "<p>WLOG $a\\geq b\\geq c.$ $$\\text {Let } x=1/(a+3b+3c),\\; y=1/(b+3c+3a),\\; z=1/(c+3a+3b).$$ We have $a+3b+3c\\leq b+3c+3a\\leq c+3b+3a.$ Therefore $$(1).\\quad x\\geq y\\geq z>0.$$ Let $f(a,b,c)=ax+by+cz.$</p>\n\n<p>Differentiating $f$ by $c,$ keeping $a$ and $b$ constant, we have $$\\partial f/\\partial c=-3ax^2-3by^2+3(a+b)z^2=3(a(z^2-x^2)+b(z^2-y^2)).$$ This is not positive by (1). Therefore $$(2).\\quad f(a,b,c)\\geq f(a,b,b).$$ Differentiating $f$ by $a,$ keeping $b$ and $c$ constant, we have $$\\partial f/\\partial a=(3b+3c)x^2-3by^2-3cz^2=3(b(x^2-y^2)+c(x^2-y^2)).$$ This is not negative by (1). Therefore $$(3). \\quad f(a,b,c)\\geq f(b,b,c).$$ By(2) and (3) we have $$f(a,b,c)\\geq f(b,b,c)\\geq f(b,b,b)=3/7.$$</p>\n"
},
{
"answer_id": 2101838,
"score": 2,
"text": "<p>We need to prove that \n$$\\sum\\limits_{cyc}\\left(\\frac{a}{a+3b+3c}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{2a-b-c}{a+3b+3c}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-b-(c-a)}{a+3b+3c}\\geq0$$ or\n$$\\sum_{cyc}(a-b)\\left(\\frac{1}{a+3b+3c}-\\frac{1}{b+3c+3a}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-b)^2}{(a+3b+3c)(b+3c+3a)}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101845,
"score": 1,
"text": "<p>Another way.\nLet $a+b+c=3$.</p>\n\n<p>Hence, we need to prove that\n$$\\sum\\limits_{cyc}\\frac{a}{9-2a}\\geq\\frac{3}{7}$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a}{9-2a}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-1}{9-2a}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a-1}{9-2a}-\\frac{a-1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-1)^2}{9-2a}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101771,
"score": 3,
"text": "<p>Note that\n\\begin{align*}\n(a+b+c)^2 \\ge 3ab+3bc+3ac.\n\\end{align*}\nTherefore,\n\\begin{align*}\n&\\ \\frac{a}{a+3b+3c}+\\frac{b}{b+3a+3c}+\\frac{c}{c+3a+3b}\\\\\n=&\\ \\frac{a^2}{a^2+3ab+3ac}+\\frac{b^2}{b^2+3ab+3bc}+\\frac{c^2}{c^2+3ac+3bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\\\\n=&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + \\frac{4}{3}(a+b+c)^2}\\\\\n=&\\ \\frac{3}{7}.\n\\end{align*}</p>\n"
},
{
"answer_id": 2100652,
"score": 2,
"text": "<p>Cauchy-Schwarz actually works pretty well.</p>\n\n<p>We have with Cauchy-Schwarz if we put $x_1=\\sqrt{\\frac{a}{a+3b+3c}}$, $x_2=\\sqrt{\\frac{b}{b+3c+3a}}$ and $x_3=\\sqrt{\\frac{c}{c+3a+3b}}$ as well as $y_1=\\sqrt{a(a+3b+3c)}$, $y_2=\\sqrt{b(b+3c+3a)}$ and $y_3=\\sqrt{c(c+3a+3b)}$\n$$\n\\left(x_1^2+x_2^2+x_3^2\\right)\\left(y_1^2+y_2^2+y_3^2\\right)≥\\left(x_1y_1+x_2y_2+x_3y_3\\right)^2\\iff\\\\\n\\left(\\frac{a}{a+3b+3c}+\\frac{b}{b+3c+3a}+\\frac{c}{c+3a+3b}\\right)\\left(a(a+3b+3c)+b(b+3c+3a)+c(c+3a+3b)\\right)≥(a+b+c)^2\n$$\nso its enough to prove\n$$\n\\frac{(a+b+c)^2}{\\sum_{cyc}a(a+3b+3c)}≥\\frac37\\iff\\\\\n7(a^2+b^2+c^2+2ab+2bc+2ca)≥3(a^2+b^2+c^2+6ab+6bc+6ca)\\iff\\\\\na^2+b^2+c^2≥ab+bc+ca\n$$\nwhich is true.</p>\n"
},
{
"answer_id": 2100665,
"score": 2,
"text": "<p><strong>Hint</strong></p>\n\n<p>The inequality is homogeneous in $(a,b,c)$ so we can choose them such that</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>so we have to find the minimum for the function:</p>\n\n<p>$$f(a,b,c)=\\frac{a}{1-2a}+\\frac{b}{1-2b}+\\frac{c}{1-2c}$$</p>\n\n<p>with boundary</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>Now can use Lagrange Multiplier and find the minimum $3/7$</p>\n"
},
{
"answer_id": 2100682,
"score": 2,
"text": "<p>moving $$\\frac{3}{7}$$ to the left and clearing the denominators we get\n$$3\\,{a}^{3}-{a}^{2}b-{a}^{2}c-a{b}^{2}-3\\,abc-a{c}^{2}+3\\,{b}^{3}-{b}^{\n2}c-b{c}^{2}+3\\,{c}^{3}\n\\geq 0$$ and this is equivalent to $$a^3+b^3+c^3-3abc+(a-b)(a^2-b^2)+(b^2-c^2)(b-c)+(a^2-c^2)(a-c)\\geq 0$$\nwhich is clear\n(the first Terms are AM-GM)</p>\n"
},
{
"answer_id": 2101102,
"score": 2,
"text": "<p>WLOG $a\\geq b\\geq c.$ $$\\text {Let } x=1/(a+3b+3c),\\; y=1/(b+3c+3a),\\; z=1/(c+3a+3b).$$ We have $a+3b+3c\\leq b+3c+3a\\leq c+3b+3a.$ Therefore $$(1).\\quad x\\geq y\\geq z>0.$$ Let $f(a,b,c)=ax+by+cz.$</p>\n\n<p>Differentiating $f$ by $c,$ keeping $a$ and $b$ constant, we have $$\\partial f/\\partial c=-3ax^2-3by^2+3(a+b)z^2=3(a(z^2-x^2)+b(z^2-y^2)).$$ This is not positive by (1). Therefore $$(2).\\quad f(a,b,c)\\geq f(a,b,b).$$ Differentiating $f$ by $a,$ keeping $b$ and $c$ constant, we have $$\\partial f/\\partial a=(3b+3c)x^2-3by^2-3cz^2=3(b(x^2-y^2)+c(x^2-y^2)).$$ This is not negative by (1). Therefore $$(3). \\quad f(a,b,c)\\geq f(b,b,c).$$ By(2) and (3) we have $$f(a,b,c)\\geq f(b,b,c)\\geq f(b,b,b)=3/7.$$</p>\n"
},
{
"answer_id": 2101838,
"score": 2,
"text": "<p>We need to prove that \n$$\\sum\\limits_{cyc}\\left(\\frac{a}{a+3b+3c}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{2a-b-c}{a+3b+3c}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-b-(c-a)}{a+3b+3c}\\geq0$$ or\n$$\\sum_{cyc}(a-b)\\left(\\frac{1}{a+3b+3c}-\\frac{1}{b+3c+3a}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-b)^2}{(a+3b+3c)(b+3c+3a)}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101845,
"score": 1,
"text": "<p>Another way.\nLet $a+b+c=3$.</p>\n\n<p>Hence, we need to prove that\n$$\\sum\\limits_{cyc}\\frac{a}{9-2a}\\geq\\frac{3}{7}$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a}{9-2a}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-1}{9-2a}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a-1}{9-2a}-\\frac{a-1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-1)^2}{9-2a}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101771,
"score": 3,
"text": "<p>Note that\n\\begin{align*}\n(a+b+c)^2 \\ge 3ab+3bc+3ac.\n\\end{align*}\nTherefore,\n\\begin{align*}\n&\\ \\frac{a}{a+3b+3c}+\\frac{b}{b+3a+3c}+\\frac{c}{c+3a+3b}\\\\\n=&\\ \\frac{a^2}{a^2+3ab+3ac}+\\frac{b^2}{b^2+3ab+3bc}+\\frac{c^2}{c^2+3ac+3bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\\\\n=&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + \\frac{4}{3}(a+b+c)^2}\\\\\n=&\\ \\frac{3}{7}.\n\\end{align*}</p>\n"
},
{
"answer_id": 2100652,
"score": 2,
"text": "<p>Cauchy-Schwarz actually works pretty well.</p>\n\n<p>We have with Cauchy-Schwarz if we put $x_1=\\sqrt{\\frac{a}{a+3b+3c}}$, $x_2=\\sqrt{\\frac{b}{b+3c+3a}}$ and $x_3=\\sqrt{\\frac{c}{c+3a+3b}}$ as well as $y_1=\\sqrt{a(a+3b+3c)}$, $y_2=\\sqrt{b(b+3c+3a)}$ and $y_3=\\sqrt{c(c+3a+3b)}$\n$$\n\\left(x_1^2+x_2^2+x_3^2\\right)\\left(y_1^2+y_2^2+y_3^2\\right)≥\\left(x_1y_1+x_2y_2+x_3y_3\\right)^2\\iff\\\\\n\\left(\\frac{a}{a+3b+3c}+\\frac{b}{b+3c+3a}+\\frac{c}{c+3a+3b}\\right)\\left(a(a+3b+3c)+b(b+3c+3a)+c(c+3a+3b)\\right)≥(a+b+c)^2\n$$\nso its enough to prove\n$$\n\\frac{(a+b+c)^2}{\\sum_{cyc}a(a+3b+3c)}≥\\frac37\\iff\\\\\n7(a^2+b^2+c^2+2ab+2bc+2ca)≥3(a^2+b^2+c^2+6ab+6bc+6ca)\\iff\\\\\na^2+b^2+c^2≥ab+bc+ca\n$$\nwhich is true.</p>\n"
},
{
"answer_id": 2100665,
"score": 2,
"text": "<p><strong>Hint</strong></p>\n\n<p>The inequality is homogeneous in $(a,b,c)$ so we can choose them such that</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>so we have to find the minimum for the function:</p>\n\n<p>$$f(a,b,c)=\\frac{a}{1-2a}+\\frac{b}{1-2b}+\\frac{c}{1-2c}$$</p>\n\n<p>with boundary</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>Now can use Lagrange Multiplier and find the minimum $3/7$</p>\n"
},
{
"answer_id": 2100682,
"score": 2,
"text": "<p>moving $$\\frac{3}{7}$$ to the left and clearing the denominators we get\n$$3\\,{a}^{3}-{a}^{2}b-{a}^{2}c-a{b}^{2}-3\\,abc-a{c}^{2}+3\\,{b}^{3}-{b}^{\n2}c-b{c}^{2}+3\\,{c}^{3}\n\\geq 0$$ and this is equivalent to $$a^3+b^3+c^3-3abc+(a-b)(a^2-b^2)+(b^2-c^2)(b-c)+(a^2-c^2)(a-c)\\geq 0$$\nwhich is clear\n(the first Terms are AM-GM)</p>\n"
},
{
"answer_id": 2101102,
"score": 2,
"text": "<p>WLOG $a\\geq b\\geq c.$ $$\\text {Let } x=1/(a+3b+3c),\\; y=1/(b+3c+3a),\\; z=1/(c+3a+3b).$$ We have $a+3b+3c\\leq b+3c+3a\\leq c+3b+3a.$ Therefore $$(1).\\quad x\\geq y\\geq z>0.$$ Let $f(a,b,c)=ax+by+cz.$</p>\n\n<p>Differentiating $f$ by $c,$ keeping $a$ and $b$ constant, we have $$\\partial f/\\partial c=-3ax^2-3by^2+3(a+b)z^2=3(a(z^2-x^2)+b(z^2-y^2)).$$ This is not positive by (1). Therefore $$(2).\\quad f(a,b,c)\\geq f(a,b,b).$$ Differentiating $f$ by $a,$ keeping $b$ and $c$ constant, we have $$\\partial f/\\partial a=(3b+3c)x^2-3by^2-3cz^2=3(b(x^2-y^2)+c(x^2-y^2)).$$ This is not negative by (1). Therefore $$(3). \\quad f(a,b,c)\\geq f(b,b,c).$$ By(2) and (3) we have $$f(a,b,c)\\geq f(b,b,c)\\geq f(b,b,b)=3/7.$$</p>\n"
},
{
"answer_id": 2101838,
"score": 2,
"text": "<p>We need to prove that \n$$\\sum\\limits_{cyc}\\left(\\frac{a}{a+3b+3c}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{2a-b-c}{a+3b+3c}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-b-(c-a)}{a+3b+3c}\\geq0$$ or\n$$\\sum_{cyc}(a-b)\\left(\\frac{1}{a+3b+3c}-\\frac{1}{b+3c+3a}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-b)^2}{(a+3b+3c)(b+3c+3a)}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101845,
"score": 1,
"text": "<p>Another way.\nLet $a+b+c=3$.</p>\n\n<p>Hence, we need to prove that\n$$\\sum\\limits_{cyc}\\frac{a}{9-2a}\\geq\\frac{3}{7}$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a}{9-2a}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-1}{9-2a}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a-1}{9-2a}-\\frac{a-1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-1)^2}{9-2a}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101771,
"score": 3,
"text": "<p>Note that\n\\begin{align*}\n(a+b+c)^2 \\ge 3ab+3bc+3ac.\n\\end{align*}\nTherefore,\n\\begin{align*}\n&\\ \\frac{a}{a+3b+3c}+\\frac{b}{b+3a+3c}+\\frac{c}{c+3a+3b}\\\\\n=&\\ \\frac{a^2}{a^2+3ab+3ac}+\\frac{b^2}{b^2+3ab+3bc}+\\frac{c^2}{c^2+3ac+3bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\\\\n=&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + \\frac{4}{3}(a+b+c)^2}\\\\\n=&\\ \\frac{3}{7}.\n\\end{align*}</p>\n"
},
{
"answer_id": 2100652,
"score": 2,
"text": "<p>Cauchy-Schwarz actually works pretty well.</p>\n\n<p>We have with Cauchy-Schwarz if we put $x_1=\\sqrt{\\frac{a}{a+3b+3c}}$, $x_2=\\sqrt{\\frac{b}{b+3c+3a}}$ and $x_3=\\sqrt{\\frac{c}{c+3a+3b}}$ as well as $y_1=\\sqrt{a(a+3b+3c)}$, $y_2=\\sqrt{b(b+3c+3a)}$ and $y_3=\\sqrt{c(c+3a+3b)}$\n$$\n\\left(x_1^2+x_2^2+x_3^2\\right)\\left(y_1^2+y_2^2+y_3^2\\right)≥\\left(x_1y_1+x_2y_2+x_3y_3\\right)^2\\iff\\\\\n\\left(\\frac{a}{a+3b+3c}+\\frac{b}{b+3c+3a}+\\frac{c}{c+3a+3b}\\right)\\left(a(a+3b+3c)+b(b+3c+3a)+c(c+3a+3b)\\right)≥(a+b+c)^2\n$$\nso its enough to prove\n$$\n\\frac{(a+b+c)^2}{\\sum_{cyc}a(a+3b+3c)}≥\\frac37\\iff\\\\\n7(a^2+b^2+c^2+2ab+2bc+2ca)≥3(a^2+b^2+c^2+6ab+6bc+6ca)\\iff\\\\\na^2+b^2+c^2≥ab+bc+ca\n$$\nwhich is true.</p>\n"
},
{
"answer_id": 2100665,
"score": 2,
"text": "<p><strong>Hint</strong></p>\n\n<p>The inequality is homogeneous in $(a,b,c)$ so we can choose them such that</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>so we have to find the minimum for the function:</p>\n\n<p>$$f(a,b,c)=\\frac{a}{1-2a}+\\frac{b}{1-2b}+\\frac{c}{1-2c}$$</p>\n\n<p>with boundary</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>Now can use Lagrange Multiplier and find the minimum $3/7$</p>\n"
},
{
"answer_id": 2100682,
"score": 2,
"text": "<p>moving $$\\frac{3}{7}$$ to the left and clearing the denominators we get\n$$3\\,{a}^{3}-{a}^{2}b-{a}^{2}c-a{b}^{2}-3\\,abc-a{c}^{2}+3\\,{b}^{3}-{b}^{\n2}c-b{c}^{2}+3\\,{c}^{3}\n\\geq 0$$ and this is equivalent to $$a^3+b^3+c^3-3abc+(a-b)(a^2-b^2)+(b^2-c^2)(b-c)+(a^2-c^2)(a-c)\\geq 0$$\nwhich is clear\n(the first Terms are AM-GM)</p>\n"
},
{
"answer_id": 2101102,
"score": 2,
"text": "<p>WLOG $a\\geq b\\geq c.$ $$\\text {Let } x=1/(a+3b+3c),\\; y=1/(b+3c+3a),\\; z=1/(c+3a+3b).$$ We have $a+3b+3c\\leq b+3c+3a\\leq c+3b+3a.$ Therefore $$(1).\\quad x\\geq y\\geq z>0.$$ Let $f(a,b,c)=ax+by+cz.$</p>\n\n<p>Differentiating $f$ by $c,$ keeping $a$ and $b$ constant, we have $$\\partial f/\\partial c=-3ax^2-3by^2+3(a+b)z^2=3(a(z^2-x^2)+b(z^2-y^2)).$$ This is not positive by (1). Therefore $$(2).\\quad f(a,b,c)\\geq f(a,b,b).$$ Differentiating $f$ by $a,$ keeping $b$ and $c$ constant, we have $$\\partial f/\\partial a=(3b+3c)x^2-3by^2-3cz^2=3(b(x^2-y^2)+c(x^2-y^2)).$$ This is not negative by (1). Therefore $$(3). \\quad f(a,b,c)\\geq f(b,b,c).$$ By(2) and (3) we have $$f(a,b,c)\\geq f(b,b,c)\\geq f(b,b,b)=3/7.$$</p>\n"
},
{
"answer_id": 2101838,
"score": 2,
"text": "<p>We need to prove that \n$$\\sum\\limits_{cyc}\\left(\\frac{a}{a+3b+3c}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{2a-b-c}{a+3b+3c}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-b-(c-a)}{a+3b+3c}\\geq0$$ or\n$$\\sum_{cyc}(a-b)\\left(\\frac{1}{a+3b+3c}-\\frac{1}{b+3c+3a}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-b)^2}{(a+3b+3c)(b+3c+3a)}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101845,
"score": 1,
"text": "<p>Another way.\nLet $a+b+c=3$.</p>\n\n<p>Hence, we need to prove that\n$$\\sum\\limits_{cyc}\\frac{a}{9-2a}\\geq\\frac{3}{7}$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a}{9-2a}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-1}{9-2a}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a-1}{9-2a}-\\frac{a-1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-1)^2}{9-2a}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101771,
"score": 3,
"text": "<p>Note that\n\\begin{align*}\n(a+b+c)^2 \\ge 3ab+3bc+3ac.\n\\end{align*}\nTherefore,\n\\begin{align*}\n&\\ \\frac{a}{a+3b+3c}+\\frac{b}{b+3a+3c}+\\frac{c}{c+3a+3b}\\\\\n=&\\ \\frac{a^2}{a^2+3ab+3ac}+\\frac{b^2}{b^2+3ab+3bc}+\\frac{c^2}{c^2+3ac+3bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\\\\n=&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + \\frac{4}{3}(a+b+c)^2}\\\\\n=&\\ \\frac{3}{7}.\n\\end{align*}</p>\n"
},
{
"answer_id": 2100652,
"score": 2,
"text": "<p>Cauchy-Schwarz actually works pretty well.</p>\n\n<p>We have with Cauchy-Schwarz if we put $x_1=\\sqrt{\\frac{a}{a+3b+3c}}$, $x_2=\\sqrt{\\frac{b}{b+3c+3a}}$ and $x_3=\\sqrt{\\frac{c}{c+3a+3b}}$ as well as $y_1=\\sqrt{a(a+3b+3c)}$, $y_2=\\sqrt{b(b+3c+3a)}$ and $y_3=\\sqrt{c(c+3a+3b)}$\n$$\n\\left(x_1^2+x_2^2+x_3^2\\right)\\left(y_1^2+y_2^2+y_3^2\\right)≥\\left(x_1y_1+x_2y_2+x_3y_3\\right)^2\\iff\\\\\n\\left(\\frac{a}{a+3b+3c}+\\frac{b}{b+3c+3a}+\\frac{c}{c+3a+3b}\\right)\\left(a(a+3b+3c)+b(b+3c+3a)+c(c+3a+3b)\\right)≥(a+b+c)^2\n$$\nso its enough to prove\n$$\n\\frac{(a+b+c)^2}{\\sum_{cyc}a(a+3b+3c)}≥\\frac37\\iff\\\\\n7(a^2+b^2+c^2+2ab+2bc+2ca)≥3(a^2+b^2+c^2+6ab+6bc+6ca)\\iff\\\\\na^2+b^2+c^2≥ab+bc+ca\n$$\nwhich is true.</p>\n"
},
{
"answer_id": 2100665,
"score": 2,
"text": "<p><strong>Hint</strong></p>\n\n<p>The inequality is homogeneous in $(a,b,c)$ so we can choose them such that</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>so we have to find the minimum for the function:</p>\n\n<p>$$f(a,b,c)=\\frac{a}{1-2a}+\\frac{b}{1-2b}+\\frac{c}{1-2c}$$</p>\n\n<p>with boundary</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>Now can use Lagrange Multiplier and find the minimum $3/7$</p>\n"
},
{
"answer_id": 2100682,
"score": 2,
"text": "<p>moving $$\\frac{3}{7}$$ to the left and clearing the denominators we get\n$$3\\,{a}^{3}-{a}^{2}b-{a}^{2}c-a{b}^{2}-3\\,abc-a{c}^{2}+3\\,{b}^{3}-{b}^{\n2}c-b{c}^{2}+3\\,{c}^{3}\n\\geq 0$$ and this is equivalent to $$a^3+b^3+c^3-3abc+(a-b)(a^2-b^2)+(b^2-c^2)(b-c)+(a^2-c^2)(a-c)\\geq 0$$\nwhich is clear\n(the first Terms are AM-GM)</p>\n"
},
{
"answer_id": 2101102,
"score": 2,
"text": "<p>WLOG $a\\geq b\\geq c.$ $$\\text {Let } x=1/(a+3b+3c),\\; y=1/(b+3c+3a),\\; z=1/(c+3a+3b).$$ We have $a+3b+3c\\leq b+3c+3a\\leq c+3b+3a.$ Therefore $$(1).\\quad x\\geq y\\geq z>0.$$ Let $f(a,b,c)=ax+by+cz.$</p>\n\n<p>Differentiating $f$ by $c,$ keeping $a$ and $b$ constant, we have $$\\partial f/\\partial c=-3ax^2-3by^2+3(a+b)z^2=3(a(z^2-x^2)+b(z^2-y^2)).$$ This is not positive by (1). Therefore $$(2).\\quad f(a,b,c)\\geq f(a,b,b).$$ Differentiating $f$ by $a,$ keeping $b$ and $c$ constant, we have $$\\partial f/\\partial a=(3b+3c)x^2-3by^2-3cz^2=3(b(x^2-y^2)+c(x^2-y^2)).$$ This is not negative by (1). Therefore $$(3). \\quad f(a,b,c)\\geq f(b,b,c).$$ By(2) and (3) we have $$f(a,b,c)\\geq f(b,b,c)\\geq f(b,b,b)=3/7.$$</p>\n"
},
{
"answer_id": 2101838,
"score": 2,
"text": "<p>We need to prove that \n$$\\sum\\limits_{cyc}\\left(\\frac{a}{a+3b+3c}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{2a-b-c}{a+3b+3c}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-b-(c-a)}{a+3b+3c}\\geq0$$ or\n$$\\sum_{cyc}(a-b)\\left(\\frac{1}{a+3b+3c}-\\frac{1}{b+3c+3a}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-b)^2}{(a+3b+3c)(b+3c+3a)}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101845,
"score": 1,
"text": "<p>Another way.\nLet $a+b+c=3$.</p>\n\n<p>Hence, we need to prove that\n$$\\sum\\limits_{cyc}\\frac{a}{9-2a}\\geq\\frac{3}{7}$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a}{9-2a}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-1}{9-2a}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a-1}{9-2a}-\\frac{a-1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-1)^2}{9-2a}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101771,
"score": 3,
"text": "<p>Note that\n\\begin{align*}\n(a+b+c)^2 \\ge 3ab+3bc+3ac.\n\\end{align*}\nTherefore,\n\\begin{align*}\n&\\ \\frac{a}{a+3b+3c}+\\frac{b}{b+3a+3c}+\\frac{c}{c+3a+3b}\\\\\n=&\\ \\frac{a^2}{a^2+3ab+3ac}+\\frac{b^2}{b^2+3ab+3bc}+\\frac{c^2}{c^2+3ac+3bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\\\\n=&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + \\frac{4}{3}(a+b+c)^2}\\\\\n=&\\ \\frac{3}{7}.\n\\end{align*}</p>\n"
},
{
"answer_id": 2100652,
"score": 2,
"text": "<p>Cauchy-Schwarz actually works pretty well.</p>\n\n<p>We have with Cauchy-Schwarz if we put $x_1=\\sqrt{\\frac{a}{a+3b+3c}}$, $x_2=\\sqrt{\\frac{b}{b+3c+3a}}$ and $x_3=\\sqrt{\\frac{c}{c+3a+3b}}$ as well as $y_1=\\sqrt{a(a+3b+3c)}$, $y_2=\\sqrt{b(b+3c+3a)}$ and $y_3=\\sqrt{c(c+3a+3b)}$\n$$\n\\left(x_1^2+x_2^2+x_3^2\\right)\\left(y_1^2+y_2^2+y_3^2\\right)≥\\left(x_1y_1+x_2y_2+x_3y_3\\right)^2\\iff\\\\\n\\left(\\frac{a}{a+3b+3c}+\\frac{b}{b+3c+3a}+\\frac{c}{c+3a+3b}\\right)\\left(a(a+3b+3c)+b(b+3c+3a)+c(c+3a+3b)\\right)≥(a+b+c)^2\n$$\nso its enough to prove\n$$\n\\frac{(a+b+c)^2}{\\sum_{cyc}a(a+3b+3c)}≥\\frac37\\iff\\\\\n7(a^2+b^2+c^2+2ab+2bc+2ca)≥3(a^2+b^2+c^2+6ab+6bc+6ca)\\iff\\\\\na^2+b^2+c^2≥ab+bc+ca\n$$\nwhich is true.</p>\n"
},
{
"answer_id": 2100665,
"score": 2,
"text": "<p><strong>Hint</strong></p>\n\n<p>The inequality is homogeneous in $(a,b,c)$ so we can choose them such that</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>so we have to find the minimum for the function:</p>\n\n<p>$$f(a,b,c)=\\frac{a}{1-2a}+\\frac{b}{1-2b}+\\frac{c}{1-2c}$$</p>\n\n<p>with boundary</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>Now can use Lagrange Multiplier and find the minimum $3/7$</p>\n"
},
{
"answer_id": 2100682,
"score": 2,
"text": "<p>moving $$\\frac{3}{7}$$ to the left and clearing the denominators we get\n$$3\\,{a}^{3}-{a}^{2}b-{a}^{2}c-a{b}^{2}-3\\,abc-a{c}^{2}+3\\,{b}^{3}-{b}^{\n2}c-b{c}^{2}+3\\,{c}^{3}\n\\geq 0$$ and this is equivalent to $$a^3+b^3+c^3-3abc+(a-b)(a^2-b^2)+(b^2-c^2)(b-c)+(a^2-c^2)(a-c)\\geq 0$$\nwhich is clear\n(the first Terms are AM-GM)</p>\n"
},
{
"answer_id": 2101102,
"score": 2,
"text": "<p>WLOG $a\\geq b\\geq c.$ $$\\text {Let } x=1/(a+3b+3c),\\; y=1/(b+3c+3a),\\; z=1/(c+3a+3b).$$ We have $a+3b+3c\\leq b+3c+3a\\leq c+3b+3a.$ Therefore $$(1).\\quad x\\geq y\\geq z>0.$$ Let $f(a,b,c)=ax+by+cz.$</p>\n\n<p>Differentiating $f$ by $c,$ keeping $a$ and $b$ constant, we have $$\\partial f/\\partial c=-3ax^2-3by^2+3(a+b)z^2=3(a(z^2-x^2)+b(z^2-y^2)).$$ This is not positive by (1). Therefore $$(2).\\quad f(a,b,c)\\geq f(a,b,b).$$ Differentiating $f$ by $a,$ keeping $b$ and $c$ constant, we have $$\\partial f/\\partial a=(3b+3c)x^2-3by^2-3cz^2=3(b(x^2-y^2)+c(x^2-y^2)).$$ This is not negative by (1). Therefore $$(3). \\quad f(a,b,c)\\geq f(b,b,c).$$ By(2) and (3) we have $$f(a,b,c)\\geq f(b,b,c)\\geq f(b,b,b)=3/7.$$</p>\n"
},
{
"answer_id": 2101838,
"score": 2,
"text": "<p>We need to prove that \n$$\\sum\\limits_{cyc}\\left(\\frac{a}{a+3b+3c}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{2a-b-c}{a+3b+3c}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-b-(c-a)}{a+3b+3c}\\geq0$$ or\n$$\\sum_{cyc}(a-b)\\left(\\frac{1}{a+3b+3c}-\\frac{1}{b+3c+3a}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-b)^2}{(a+3b+3c)(b+3c+3a)}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101845,
"score": 1,
"text": "<p>Another way.\nLet $a+b+c=3$.</p>\n\n<p>Hence, we need to prove that\n$$\\sum\\limits_{cyc}\\frac{a}{9-2a}\\geq\\frac{3}{7}$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a}{9-2a}-\\frac{1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{a-1}{9-2a}\\geq0$$ or\n$$\\sum\\limits_{cyc}\\left(\\frac{a-1}{9-2a}-\\frac{a-1}{7}\\right)\\geq0$$ or\n$$\\sum\\limits_{cyc}\\frac{(a-1)^2}{9-2a}\\geq0.$$\nDone!</p>\n"
},
{
"answer_id": 2101771,
"score": 3,
"text": "<p>Note that\n\\begin{align*}\n(a+b+c)^2 \\ge 3ab+3bc+3ac.\n\\end{align*}\nTherefore,\n\\begin{align*}\n&\\ \\frac{a}{a+3b+3c}+\\frac{b}{b+3a+3c}+\\frac{c}{c+3a+3b}\\\\\n=&\\ \\frac{a^2}{a^2+3ab+3ac}+\\frac{b^2}{b^2+3ab+3bc}+\\frac{c^2}{c^2+3ac+3bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\\\\n=&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\\\\n\\ge&\\ \\frac{(a+b+c)^2}{(a+b+c)^2 + \\frac{4}{3}(a+b+c)^2}\\\\\n=&\\ \\frac{3}{7}.\n\\end{align*}</p>\n"
},
{
"answer_id": 2100652,
"score": 2,
"text": "<p>Cauchy-Schwarz actually works pretty well.</p>\n\n<p>We have with Cauchy-Schwarz if we put $x_1=\\sqrt{\\frac{a}{a+3b+3c}}$, $x_2=\\sqrt{\\frac{b}{b+3c+3a}}$ and $x_3=\\sqrt{\\frac{c}{c+3a+3b}}$ as well as $y_1=\\sqrt{a(a+3b+3c)}$, $y_2=\\sqrt{b(b+3c+3a)}$ and $y_3=\\sqrt{c(c+3a+3b)}$\n$$\n\\left(x_1^2+x_2^2+x_3^2\\right)\\left(y_1^2+y_2^2+y_3^2\\right)≥\\left(x_1y_1+x_2y_2+x_3y_3\\right)^2\\iff\\\\\n\\left(\\frac{a}{a+3b+3c}+\\frac{b}{b+3c+3a}+\\frac{c}{c+3a+3b}\\right)\\left(a(a+3b+3c)+b(b+3c+3a)+c(c+3a+3b)\\right)≥(a+b+c)^2\n$$\nso its enough to prove\n$$\n\\frac{(a+b+c)^2}{\\sum_{cyc}a(a+3b+3c)}≥\\frac37\\iff\\\\\n7(a^2+b^2+c^2+2ab+2bc+2ca)≥3(a^2+b^2+c^2+6ab+6bc+6ca)\\iff\\\\\na^2+b^2+c^2≥ab+bc+ca\n$$\nwhich is true.</p>\n"
},
{
"answer_id": 2100665,
"score": 2,
"text": "<p><strong>Hint</strong></p>\n\n<p>The inequality is homogeneous in $(a,b,c)$ so we can choose them such that</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>so we have to find the minimum for the function:</p>\n\n<p>$$f(a,b,c)=\\frac{a}{1-2a}+\\frac{b}{1-2b}+\\frac{c}{1-2c}$$</p>\n\n<p>with boundary</p>\n\n<p>$$a+b+c=\\frac{1}{3}$$</p>\n\n<p>Now can use Lagrange Multiplier and find the minimum $3/7$</p>\n"
},
{
"answer_id": 2100682,
"score": 2,
"text": "<p>moving $$\\frac{3}{7}$$ to the left and clearing the denominators we get\n$$3\\,{a}^{3}-{a}^{2}b-{a}^{2}c-a{b}^{2}-3\\,abc-a{c}^{2}+3\\,{b}^{3}-{b}^{\n2}c-b{c}^{2}+3\\,{c}^{3}\n\\geq 0$$ and this is equivalent to $$a^3+b^3+c^3-3abc+(a-b)(a^2-b^2)+(b^2-c^2)(b-c)+(a^2-c^2)(a-c)\\geq 0$$\nwhich is clear\n(the first Terms are AM-GM)</p>\n"
},
{
"answer_id": 2101102,
"score": 2,
"text": "<p>WLOG $a\\geq b\\geq c.$ $$\\text {Let } x=1/(a+3b+3c),\\; y=1/(b+3c+3a),\\; z=1/(c+3a+3b).$$ We have $a+3b+3c\\leq b+3c+3a\\leq c+3b+3a.$ Therefore $$(1).\\quad x\\geq y\\geq z>0.$$ Let $f(a,b,c)=ax+by+cz.$</p>\n\n<p>Differentiating $f$ by $c,$ keeping $a$ and $b$ constant, we have $$\\partial f/\\partial c=-3ax^2-3by^2+3(a+b)z^2=3(a(z^2-x^2)+b(z^2-y^2)).$$ This is not positive by (1). Therefore $$(2).\\quad f(a,b,c)\\geq f(a,b,b).$$ Differentiating $f$ by $a,$ keeping $b$ and $c$ constant, we have $$\\partial f/\\partial a=(3b+3c)x^2-3by^2-3cz^2=3(b(x^2-y^2)+c(x^2-y^2)).$$ This is not negative by (1). Therefore $$(3). \\quad f(a,b,c)\\geq f(b,b,c).$$ By(2) and (3) we have $$f(a,b,c)\\geq f(b,b,c)\\geq f(b,b,b)=3/7.$$</p>\n"
}
] | Question: Proof that $\frac{a}{a+3b+3c}+\frac{b}{b+3a+3c}+\frac{c}{c+3a+3b} \ge \frac{3}{7}$ for all $a,b,c > 0$
Answer 1: <p>Note that
\begin{align*}
(a+b+c)^2 \ge 3ab+3bc+3ac.
\end{align*}
Therefore,
\begin{align*}
&\ \frac{a}{a+3b+3c}+\frac{b}{b+3a+3c}+\frac{c}{c+3a+3b}\\
=&\ \frac{a^2}{a^2+3ab+3ac}+\frac{b^2}{b^2+3ab+3bc}+\frac{c^2}{c^2+3ac+3bc}\\
\ge&\ \frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\
=&\ \frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\
\ge&\ \frac{(a+b+c)^2}{(a+b+c)^2 + \frac{4}{3}(a+b+c)^2}\\
=&\ \frac{3}{7}.
\end{align*}</p>
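<p>A quick numerical sanity check of the bound (a sketch, assuming a plain Python environment; the helper name <code>S</code> is just for illustration): random positive triples never fall below $3/7$, and $a=b=c$ gives equality.</p>
<pre><code>import random

def S(a, b, c):
    # the cyclic sum a/(a+3b+3c) + b/(b+3c+3a) + c/(c+3a+3b)
    return (a / (a + 3*b + 3*c)
            + b / (b + 3*c + 3*a)
            + c / (c + 3*a + 3*b))

random.seed(0)
samples = [S(random.uniform(0.01, 10), random.uniform(0.01, 10),
             random.uniform(0.01, 10)) for _ in range(100000)]
print(min(samples) >= 3/7 - 1e-12)  # True: no sample violates the bound
print(S(1, 1, 1))                   # 0.42857... = 3/7, the equality case
</code></pre>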
Answer 2: <p>Cauchy-Schwarz actually works pretty well.</p>
<p>With Cauchy-Schwarz, putting $x_1=\sqrt{\frac{a}{a+3b+3c}}$, $x_2=\sqrt{\frac{b}{b+3c+3a}}$ and $x_3=\sqrt{\frac{c}{c+3a+3b}}$ as well as $y_1=\sqrt{a(a+3b+3c)}$, $y_2=\sqrt{b(b+3c+3a)}$ and $y_3=\sqrt{c(c+3a+3b)}$, we have
$$
\left(x_1^2+x_2^2+x_3^2\right)\left(y_1^2+y_2^2+y_3^2\right)≥\left(x_1y_1+x_2y_2+x_3y_3\right)^2\iff\\
\left(\frac{a}{a+3b+3c}+\frac{b}{b+3c+3a}+\frac{c}{c+3a+3b}\right)\left(a(a+3b+3c)+b(b+3c+3a)+c(c+3a+3b)\right)≥(a+b+c)^2
$$
so it's enough to prove
$$
\frac{(a+b+c)^2}{\sum_{cyc}a(a+3b+3c)}≥\frac37\iff\\
7(a^2+b^2+c^2+2ab+2bc+2ca)≥3(a^2+b^2+c^2+6ab+6bc+6ca)\iff\\
a^2+b^2+c^2≥ab+bc+ca
$$
which is true.</p>
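<p>The expansion used in the last step, $\sum_{cyc}a(a+3b+3c)=(a+b+c)^2+4(ab+bc+ca)$, which is also the denominator sum in Answer 1, can be verified symbolically; a sketch assuming sympy is available:</p>
<pre><code>import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
lhs = a*(a + 3*b + 3*c) + b*(b + 3*c + 3*a) + c*(c + 3*a + 3*b)
rhs = (a + b + c)**2 + 4*(a*b + b*c + c*a)
print(sp.expand(lhs - rhs) == 0)  # True
</code></pre>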
Answer 3: <p><strong>Hint</strong></p>
<p>The inequality is homogeneous in $(a,b,c)$ so we can choose them such that</p>
<p>$$a+b+c=\frac{1}{3}$$</p>
<p>so we have to find the minimum of the function</p>
<p>$$f(a,b,c)=\frac{a}{1-2a}+\frac{b}{1-2b}+\frac{c}{1-2c}$$</p>
<p>subject to the constraint</p>
<p>$$a+b+c=\frac{1}{3}$$</p>
<p>Now we can use Lagrange multipliers to find that the minimum is $3/7$.</p>
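<p>A numerical version of this hint (a sketch, assuming numpy and scipy are available; the starting point is arbitrary but feasible): the constrained minimizer lands at $a=b=c=1/9$ with value $3/7$.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def f(v):
    a, b, c = v
    return a/(1 - 2*a) + b/(1 - 2*b) + c/(1 - 2*c)

cons = {'type': 'eq', 'fun': lambda v: v.sum() - 1/3}
bnds = [(1e-6, 1/3 - 1e-6)] * 3
res = minimize(f, x0=np.array([0.10, 0.12, 1/3 - 0.22]),
               bounds=bnds, constraints=cons)
print(res.x)    # ~ [0.111, 0.111, 0.111], i.e. a = b = c = 1/9
print(res.fun)  # ~ 0.428571 = 3/7
</code></pre>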
Answer 4: <p>Moving $\frac{3}{7}$ to the left and clearing the (positive) denominators, we get
$$3a^{3}-a^{2}b-a^{2}c-ab^{2}-3abc-ac^{2}+3b^{3}-b^{2}c-bc^{2}+3c^{3}\geq 0,$$ and this is equivalent to $$a^3+b^3+c^3-3abc+(a-b)(a^2-b^2)+(b-c)(b^2-c^2)+(a-c)(a^2-c^2)\geq 0,$$
which is clear: $a^3+b^3+c^3\geq 3abc$ by AM-GM, and each of the three remaining products is nonnegative.</p>
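<p>The identity between the cleared cubic and the AM-GM decomposition can be checked symbolically; a sketch assuming sympy is available:</p>
<pre><code>import sympy as sp

a, b, c = sp.symbols('a b c')
cubic = (3*a**3 - a**2*b - a**2*c - a*b**2 - 3*a*b*c - a*c**2
         + 3*b**3 - b**2*c - b*c**2 + 3*c**3)
decomp = (a**3 + b**3 + c**3 - 3*a*b*c
          + (a-b)*(a**2-b**2) + (b-c)*(b**2-c**2) + (a-c)*(a**2-c**2))
print(sp.expand(cubic - decomp) == 0)  # True: the two forms agree
</code></pre>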
Answer 5: <p>WLOG $a\geq b\geq c.$ $$\text {Let } x=1/(a+3b+3c),\; y=1/(b+3c+3a),\; z=1/(c+3a+3b).$$ We have $a+3b+3c\leq b+3c+3a\leq c+3b+3a.$ Therefore $$(1).\quad x\geq y\geq z>0.$$ Let $f(a,b,c)=ax+by+cz.$</p>
<p>Differentiating $f$ by $c,$ keeping $a$ and $b$ constant, we have $$\partial f/\partial c=-3ax^2-3by^2+3(a+b)z^2=3(a(z^2-x^2)+b(z^2-y^2)).$$ This is not positive by (1). Therefore $$(2).\quad f(a,b,c)\geq f(a,b,b).$$ Differentiating $f$ by $a,$ keeping $b$ and $c$ constant, we have $$\partial f/\partial a=(3b+3c)x^2-3by^2-3cz^2=3(b(x^2-y^2)+c(x^2-z^2)).$$ This is not negative by (1). Therefore $$(3). \quad f(a,b,c)\geq f(b,b,c).$$ By (2) and (3) we have $$f(a,b,c)\geq f(b,b,c)\geq f(b,b,b)=3/7.$$</p>
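<p>Both partial derivative formulas above can be confirmed symbolically; a sketch assuming sympy is available:</p>
<pre><code>import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
x = 1/(a + 3*b + 3*c)
y = 1/(b + 3*c + 3*a)
z = 1/(c + 3*a + 3*b)
f = a*x + b*y + c*z

# df/dc = 3(a(z^2 - x^2) + b(z^2 - y^2)),  df/da = 3(b(x^2 - y^2) + c(x^2 - z^2))
print(sp.simplify(sp.diff(f, c) - 3*(a*(z**2 - x**2) + b*(z**2 - y**2))) == 0)  # True
print(sp.simplify(sp.diff(f, a) - 3*(b*(x**2 - y**2) + c*(x**2 - z**2))) == 0)  # True
</code></pre>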
Answer 6: <p>We need to prove that
$$\sum\limits_{cyc}\left(\frac{a}{a+3b+3c}-\frac{1}{7}\right)\geq0$$ or
$$\sum\limits_{cyc}\frac{2a-b-c}{a+3b+3c}\geq0$$ or
$$\sum\limits_{cyc}\frac{a-b-(c-a)}{a+3b+3c}\geq0$$ or
$$\sum_{cyc}(a-b)\left(\frac{1}{a+3b+3c}-\frac{1}{b+3c+3a}\right)\geq0$$ or
$$\sum\limits_{cyc}\frac{2(a-b)^2}{(a+3b+3c)(b+3c+3a)}\geq0.$$
Done!</p>
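<p>Each "or" above rescales by a positive constant; the regrouping into squares is an exact identity, checkable symbolically (a sketch assuming sympy is available):</p>
<pre><code>import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
D1, D2, D3 = a + 3*b + 3*c, b + 3*c + 3*a, c + 3*a + 3*b
step = (2*a - b - c)/D1 + (2*b - c - a)/D2 + (2*c - a - b)/D3
sos = 2*((a-b)**2/(D1*D2) + (b-c)**2/(D2*D3) + (c-a)**2/(D3*D1))
print(sp.simplify(step - sos) == 0)  # True: the SOS form is exact
</code></pre>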
Answer 7: <p>Another way.
Since the inequality is homogeneous, we may assume $a+b+c=3$.</p>
<p>Hence, we need to prove that
$$\sum\limits_{cyc}\frac{a}{9-2a}\geq\frac{3}{7}$$ or
$$\sum\limits_{cyc}\left(\frac{a}{9-2a}-\frac{1}{7}\right)\geq0$$ or
$$\sum\limits_{cyc}\frac{a-1}{9-2a}\geq0$$ or, since $\sum\limits_{cyc}(a-1)=0$,
$$\sum\limits_{cyc}\left(\frac{a-1}{9-2a}-\frac{a-1}{7}\right)\geq0$$ or
$$\sum\limits_{cyc}\frac{2(a-1)^2}{7(9-2a)}\geq0,$$
which holds because $9-2a=a+3b+3c>0$.
Done!</p>
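<p>The single-term identity behind the last step can be checked symbolically; a sketch assuming sympy is available:</p>
<pre><code>import sympy as sp

a = sp.symbols('a', positive=True)
term = (a - 1)/(9 - 2*a) - (a - 1)/7
print(sp.simplify(term - 2*(a - 1)**2/(7*(9 - 2*a))) == 0)  # True
</code></pre>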
|
Show $\text{det}(A)$ is divisible by $2^{n-1}$ | 2,101,303 | 5 | https://math.stackexchange.com/questions/2101303/show-textdeta-is-divisible-by-2n-1 | <p>Let $A_{n \times n}$ be an $n$ by $n$ matrix with all entries equal to $\pm 1$. I want to show $\text{det}(A)$ is divisible by $2^{n-1}$. How can I approach this?</p>
| [
{
"answer_id": 2101310,
"score": 10,
"text": "<p>After one step of Gauß algorithm, the matrix looks like</p>\n\n<p>$$\\begin{pmatrix}\\pm 1 & \\star \\\\ 0 & B\\end{pmatrix},$$</p>\n\n<p>where $B$ is a matrix, whose entries are all in $\\{0, \\pm 2\\}$. Thus you can use <a href=\"https://math.stackexchange.com/questions/2101302/show-textdeta-is-divisible-by-an\">Show $\\text{det}(A)$ is divisible by $a^n$</a> .</p>\n"
},
{
"answer_id": 2101359,
"score": 1,
"text": "<p>For $n = 1$, the claim is obviously true.\n<p>\nProceed by induction.\n<p>\nAssume the claim holds for a given positive integer $n$.\n<p>\nLet $A = (a_{ij})$ be an $(n + 1) \\times (n+1)$ matrix such $|a_{ij}| = 1$, for all $i,j$.\n<p>\nBy successive changes, one entry at a time, change the entries of $A$ so that they all are equal to $+1$. In other words, at each stage, find one entry of $A$, if any, which is equal to $-1$ (it doesn't matter which), and change the value from $-1$ to $+1$, yielding a new matrix $A^\\prime$.\n<p>\nThen $|\\text{det}(A^\\prime) - \\text{det}(A)| = |2c_{ij}|$, where $c_{ij}$ is the cofactor of $A$ at row $i$, column $j$.\n<p>\nBy the inductive hypothesis, $2^{n-1}\\,|\\,c_{ij}$, hence $2^n|(\\text{det}(A^\\prime) - \\text{det}(A))$.\n<p>\nEquivalently, $\\text{det}(A^\\prime) \\equiv \\text{det}(A) \\pmod {2^n}$.\n<p>\nThus, as the transitions progress, the determinants mod $2^n$ stay the same.\n<p>\nBut at the end of all the transitions, the final matrix is the $(n + 1) \\times (n+1)$ matrix of all ones, hence (being singular) has deteminant $0$.\n<p>\nIt follows that $\\text{det}(A) \\equiv 0 \\pmod {2^n}$, hence $2^n\\,|\\,\\text{det}(A)$, which completes the induction.</p>\n\n<p>\n"
}
] | Question: Show $\text{det}(A)$ is divisible by $2^{n-1}$
Answer 1: <p>After one step of the Gauß algorithm, the matrix looks like</p>
<p>$$\begin{pmatrix}\pm 1 & \star \\ 0 & B\end{pmatrix},$$</p>
<p>where $B$ is a matrix whose entries are all in $\{0, \pm 2\}$. Thus you can use <a href="https://math.stackexchange.com/questions/2101302/show-textdeta-is-divisible-by-an">Show $\text{det}(A)$ is divisible by $a^n$</a>.</p>
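<p>An empirical check of the statement using exact integer determinants; a sketch assuming sympy is available:</p>
<pre><code>import random
from sympy import Matrix

random.seed(1)
for n in range(1, 7):
    for _ in range(200):
        A = Matrix(n, n, lambda i, j: random.choice([-1, 1]))
        assert A.det() % 2**(n - 1) == 0  # exact integer arithmetic
print("det(A) divisible by 2^(n-1) in every sample")
</code></pre>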
Answer 2: <p>For $n = 1$, the claim is obviously true.
<p>
Proceed by induction.
<p>
Assume the claim holds for a given positive integer $n$.
<p>
Let $A = (a_{ij})$ be an $(n + 1) \times (n+1)$ matrix such that $|a_{ij}| = 1$ for all $i,j$.
<p>
By successive changes, one entry at a time, change the entries of $A$ so that they all are equal to $+1$. In other words, at each stage, find one entry of $A$, if any, which is equal to $-1$ (it doesn't matter which), and change the value from $-1$ to $+1$, yielding a new matrix $A^\prime$.
<p>
Then $|\text{det}(A^\prime) - \text{det}(A)| = |2c_{ij}|$, where $c_{ij}$ is the cofactor of $A$ at row $i$, column $j$.
<p>
By the inductive hypothesis, $2^{n-1}\,|\,c_{ij}$, hence $2^n|(\text{det}(A^\prime) - \text{det}(A))$.
<p>
Equivalently, $\text{det}(A^\prime) \equiv \text{det}(A) \pmod {2^n}$.
<p>
Thus, as the transitions progress, the determinants mod $2^n$ stay the same.
<p>
But at the end of all the transitions, the final matrix is the $(n + 1) \times (n+1)$ matrix of all ones, hence (being singular) has determinant $0$.
<p>
It follows that $\text{det}(A) \equiv 0 \pmod {2^n}$, hence $2^n\,|\,\text{det}(A)$, which completes the induction.</p>
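<p>The key step of the induction, that flipping one entry changes the determinant by twice a cofactor, can be checked on a concrete matrix; a sketch assuming sympy is available:</p>
<pre><code>import random
from sympy import Matrix

random.seed(2)
n = 4
A = Matrix(n, n, lambda i, j: random.choice([-1, 1]))
A[0, 0] = -1                 # make sure there is a -1 entry to flip
Ap = A.copy()
Ap[0, 0] = 1                 # flip it to +1
# det is linear in each entry, so the change is exactly 2 * cofactor:
print(Ap.det() - A.det() == 2*A.cofactor(0, 0))  # True
# the cofactor is a 3x3 determinant with +-1 entries,
# divisible by 2^2 by the inductive hypothesis
</code></pre>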
|
Limit of a function with powers (L'Hopital doesnt work) | 2,101,320 | 5 | https://math.stackexchange.com/questions/2101320/limit-of-a-function-with-powers-lhopital-doesnt-work | <p>I have a little problem with the limit of this function:</p>
<p>$\lim_{x \to \infty} x^2(2017^{\frac{1}{x}} - 2017^{\frac{1}{x+1}})$</p>
<p>I have tried L'Hopital's rule twice, but it doesn't work. Now I have no idea how to do it.</p>
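<p>(A numerical look at the expression before any analysis, in plain Python; the values suggest the limit is $\ln 2017 \approx 7.6094$:)</p>
<pre><code>import math

f = lambda x: x*x*(2017**(1/x) - 2017**(1/(x + 1)))
for x in [10.0, 100.0, 1000.0, 10000.0]:
    print(x, f(x))      # decreases toward ~7.609 as x grows
print(math.log(2017))   # 7.6094..., the apparent limit
</code></pre>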
| [
{
"answer_id": 2101353,
"score": 6,
"text": "<p>For finite $a>0,$</p>\n\n<p>$$\\lim_{x\\to\\infty}x^2(a^{1/x}-a^{1/(x+1)})=\\lim_{x\\to\\infty}a^{1/(x+1)}\\lim_{x\\to\\infty}x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)$$</p>\n\n<p>Now $\\displaystyle x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)=\\dfrac{a^{\\frac1{x(x+1)}}-1}{\\frac1{x(x+1)}}\\cdot\\dfrac1{1+\\frac1x}$</p>\n\n<p>Set $\\frac1{x(x+1)}=y$ to use $\\lim_{y\\to0}\\dfrac{a^y-1}y=\\ln a$</p>\n"
},
{
"answer_id": 2101329,
"score": 4,
"text": "<p>Let us consider $$A=x^2(a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}})$$ $$a^{\\frac{1}{x}}=e^{\\frac{\\log(a)}x}=1+\\frac{\\log (a)}{x}+\\frac{\\log ^2(a)}{2 x^2}+\\frac{\\log ^3(a)}{6\n x^3}+O\\left(\\frac{1}{x^4}\\right)$$ Do the same for the other term; subtract from eash other, use common denominator and so on. </p>\n\n<p>For the first term,\n$$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\log(a)\\left(\\frac{1}{x}-\\frac{1}{x+1} \\right)+\\cdots=\\log(a)\\frac{x+1-x}{x(x+1)}+\\cdots=\\frac{\\log(a)}{x(x+1)}+\\cdots$$ and so on.</p>\n\n<p>You should arrive to $$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\frac{\\log (a)}{x^2}+\\frac{\\log ^2(a)-\\log\n (a)}{x^3}+O\\left(\\frac{1}{x^4}\\right)$$ whcih will show the limit and how it is approached.</p>\n"
},
{
"answer_id": 2101425,
"score": 2,
"text": "<p>Since $\\lim_{x\\to\\infty}2017^{1/(x+1)}=1$, your limit is\n$$\n\\lim_{x\\to\\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)\n=\n\\lim_{x\\to\\infty}x^2(2017^{1/(x^2+x)}-1)\\\\\n$$\nNow do $t^{-1}=x^2+x$, so\n$$\nx^2=\\frac{t+2-\\sqrt{t^2+4t}}{2t}\n$$\nand you get\n$$\n\\lim_{t\\to0^+}\\frac{(2017^t-1)}{t}\\frac{t+2-\\sqrt{t^2+4t}}{2}=\\ln2017\n$$</p>\n"
},
{
"answer_id": 2101324,
"score": 1,
"text": "<p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href=\"https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D\" rel=\"nofollow noreferrer\">confirms</a>, \n$$\n2017^\\frac{1}{x} - 2017^\\frac{1}{x+1} = \\frac{\\log (2017)}{x^2} + o(1/x^2)\n$$\nas $x \\to +\\infty$, and therefore the limit equals $\\log (2017)$.</p>\n"
},
{
"answer_id": 2101353,
"score": 6,
"text": "<p>For finite $a>0,$</p>\n\n<p>$$\\lim_{x\\to\\infty}x^2(a^{1/x}-a^{1/(x+1)})=\\lim_{x\\to\\infty}a^{1/(x+1)}\\lim_{x\\to\\infty}x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)$$</p>\n\n<p>Now $\\displaystyle x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)=\\dfrac{a^{\\frac1{x(x+1)}}-1}{\\frac1{x(x+1)}}\\cdot\\dfrac1{1+\\frac1x}$</p>\n\n<p>Set $\\frac1{x(x+1)}=y$ to use $\\lim_{y\\to0}\\dfrac{a^y-1}y=\\ln a$</p>\n"
},
{
"answer_id": 2101329,
"score": 4,
"text": "<p>Let us consider $$A=x^2(a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}})$$ $$a^{\\frac{1}{x}}=e^{\\frac{\\log(a)}x}=1+\\frac{\\log (a)}{x}+\\frac{\\log ^2(a)}{2 x^2}+\\frac{\\log ^3(a)}{6\n x^3}+O\\left(\\frac{1}{x^4}\\right)$$ Do the same for the other term; subtract from eash other, use common denominator and so on. </p>\n\n<p>For the first term,\n$$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\log(a)\\left(\\frac{1}{x}-\\frac{1}{x+1} \\right)+\\cdots=\\log(a)\\frac{x+1-x}{x(x+1)}+\\cdots=\\frac{\\log(a)}{x(x+1)}+\\cdots$$ and so on.</p>\n\n<p>You should arrive to $$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\frac{\\log (a)}{x^2}+\\frac{\\log ^2(a)-\\log\n (a)}{x^3}+O\\left(\\frac{1}{x^4}\\right)$$ whcih will show the limit and how it is approached.</p>\n"
},
{
"answer_id": 2101425,
"score": 2,
"text": "<p>Since $\\lim_{x\\to\\infty}2017^{1/(x+1)}=1$, your limit is\n$$\n\\lim_{x\\to\\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)\n=\n\\lim_{x\\to\\infty}x^2(2017^{1/(x^2+x)}-1)\\\\\n$$\nNow do $t^{-1}=x^2+x$, so\n$$\nx^2=\\frac{t+2-\\sqrt{t^2+4t}}{2t}\n$$\nand you get\n$$\n\\lim_{t\\to0^+}\\frac{(2017^t-1)}{t}\\frac{t+2-\\sqrt{t^2+4t}}{2}=\\ln2017\n$$</p>\n"
},
{
"answer_id": 2101324,
"score": 1,
"text": "<p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href=\"https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D\" rel=\"nofollow noreferrer\">confirms</a>, \n$$\n2017^\\frac{1}{x} - 2017^\\frac{1}{x+1} = \\frac{\\log (2017)}{x^2} + o(1/x^2)\n$$\nas $x \\to +\\infty$, and therefore the limit equals $\\log (2017)$.</p>\n"
},
{
"answer_id": 2101353,
"score": 6,
"text": "<p>For finite $a>0,$</p>\n\n<p>$$\\lim_{x\\to\\infty}x^2(a^{1/x}-a^{1/(x+1)})=\\lim_{x\\to\\infty}a^{1/(x+1)}\\lim_{x\\to\\infty}x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)$$</p>\n\n<p>Now $\\displaystyle x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)=\\dfrac{a^{\\frac1{x(x+1)}}-1}{\\frac1{x(x+1)}}\\cdot\\dfrac1{1+\\frac1x}$</p>\n\n<p>Set $\\frac1{x(x+1)}=y$ to use $\\lim_{y\\to0}\\dfrac{a^y-1}y=\\ln a$</p>\n"
},
{
"answer_id": 2101329,
"score": 4,
"text": "<p>Let us consider $$A=x^2(a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}})$$ $$a^{\\frac{1}{x}}=e^{\\frac{\\log(a)}x}=1+\\frac{\\log (a)}{x}+\\frac{\\log ^2(a)}{2 x^2}+\\frac{\\log ^3(a)}{6\n x^3}+O\\left(\\frac{1}{x^4}\\right)$$ Do the same for the other term; subtract from eash other, use common denominator and so on. </p>\n\n<p>For the first term,\n$$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\log(a)\\left(\\frac{1}{x}-\\frac{1}{x+1} \\right)+\\cdots=\\log(a)\\frac{x+1-x}{x(x+1)}+\\cdots=\\frac{\\log(a)}{x(x+1)}+\\cdots$$ and so on.</p>\n\n<p>You should arrive to $$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\frac{\\log (a)}{x^2}+\\frac{\\log ^2(a)-\\log\n (a)}{x^3}+O\\left(\\frac{1}{x^4}\\right)$$ whcih will show the limit and how it is approached.</p>\n"
},
{
"answer_id": 2101425,
"score": 2,
"text": "<p>Since $\\lim_{x\\to\\infty}2017^{1/(x+1)}=1$, your limit is\n$$\n\\lim_{x\\to\\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)\n=\n\\lim_{x\\to\\infty}x^2(2017^{1/(x^2+x)}-1)\\\\\n$$\nNow do $t^{-1}=x^2+x$, so\n$$\nx^2=\\frac{t+2-\\sqrt{t^2+4t}}{2t}\n$$\nand you get\n$$\n\\lim_{t\\to0^+}\\frac{(2017^t-1)}{t}\\frac{t+2-\\sqrt{t^2+4t}}{2}=\\ln2017\n$$</p>\n"
},
{
"answer_id": 2101324,
"score": 1,
"text": "<p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href=\"https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D\" rel=\"nofollow noreferrer\">confirms</a>, \n$$\n2017^\\frac{1}{x} - 2017^\\frac{1}{x+1} = \\frac{\\log (2017)}{x^2} + o(1/x^2)\n$$\nas $x \\to +\\infty$, and therefore the limit equals $\\log (2017)$.</p>\n"
},
{
"answer_id": 2101353,
"score": 6,
"text": "<p>For finite $a>0,$</p>\n\n<p>$$\\lim_{x\\to\\infty}x^2(a^{1/x}-a^{1/(x+1)})=\\lim_{x\\to\\infty}a^{1/(x+1)}\\lim_{x\\to\\infty}x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)$$</p>\n\n<p>Now $\\displaystyle x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)=\\dfrac{a^{\\frac1{x(x+1)}}-1}{\\frac1{x(x+1)}}\\cdot\\dfrac1{1+\\frac1x}$</p>\n\n<p>Set $\\frac1{x(x+1)}=y$ to use $\\lim_{y\\to0}\\dfrac{a^y-1}y=\\ln a$</p>\n"
},
{
"answer_id": 2101329,
"score": 4,
"text": "<p>Let us consider $$A=x^2(a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}})$$ $$a^{\\frac{1}{x}}=e^{\\frac{\\log(a)}x}=1+\\frac{\\log (a)}{x}+\\frac{\\log ^2(a)}{2 x^2}+\\frac{\\log ^3(a)}{6\n x^3}+O\\left(\\frac{1}{x^4}\\right)$$ Do the same for the other term; subtract from eash other, use common denominator and so on. </p>\n\n<p>For the first term,\n$$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\log(a)\\left(\\frac{1}{x}-\\frac{1}{x+1} \\right)+\\cdots=\\log(a)\\frac{x+1-x}{x(x+1)}+\\cdots=\\frac{\\log(a)}{x(x+1)}+\\cdots$$ and so on.</p>\n\n<p>You should arrive to $$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\frac{\\log (a)}{x^2}+\\frac{\\log ^2(a)-\\log\n (a)}{x^3}+O\\left(\\frac{1}{x^4}\\right)$$ whcih will show the limit and how it is approached.</p>\n"
},
{
"answer_id": 2101425,
"score": 2,
"text": "<p>Since $\\lim_{x\\to\\infty}2017^{1/(x+1)}=1$, your limit is\n$$\n\\lim_{x\\to\\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)\n=\n\\lim_{x\\to\\infty}x^2(2017^{1/(x^2+x)}-1)\\\\\n$$\nNow do $t^{-1}=x^2+x$, so\n$$\nx^2=\\frac{t+2-\\sqrt{t^2+4t}}{2t}\n$$\nand you get\n$$\n\\lim_{t\\to0^+}\\frac{(2017^t-1)}{t}\\frac{t+2-\\sqrt{t^2+4t}}{2}=\\ln2017\n$$</p>\n"
},
{
"answer_id": 2101324,
"score": 1,
"text": "<p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href=\"https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D\" rel=\"nofollow noreferrer\">confirms</a>, \n$$\n2017^\\frac{1}{x} - 2017^\\frac{1}{x+1} = \\frac{\\log (2017)}{x^2} + o(1/x^2)\n$$\nas $x \\to +\\infty$, and therefore the limit equals $\\log (2017)$.</p>\n"
},
{
"answer_id": 2101353,
"score": 6,
"text": "<p>For finite $a>0,$</p>\n\n<p>$$\\lim_{x\\to\\infty}x^2(a^{1/x}-a^{1/(x+1)})=\\lim_{x\\to\\infty}a^{1/(x+1)}\\lim_{x\\to\\infty}x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)$$</p>\n\n<p>Now $\\displaystyle x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)=\\dfrac{a^{\\frac1{x(x+1)}}-1}{\\frac1{x(x+1)}}\\cdot\\dfrac1{1+\\frac1x}$</p>\n\n<p>Set $\\frac1{x(x+1)}=y$ to use $\\lim_{y\\to0}\\dfrac{a^y-1}y=\\ln a$</p>\n"
},
{
"answer_id": 2101329,
"score": 4,
"text": "<p>Let us consider $$A=x^2(a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}})$$ $$a^{\\frac{1}{x}}=e^{\\frac{\\log(a)}x}=1+\\frac{\\log (a)}{x}+\\frac{\\log ^2(a)}{2 x^2}+\\frac{\\log ^3(a)}{6\n x^3}+O\\left(\\frac{1}{x^4}\\right)$$ Do the same for the other term; subtract from eash other, use common denominator and so on. </p>\n\n<p>For the first term,\n$$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\log(a)\\left(\\frac{1}{x}-\\frac{1}{x+1} \\right)+\\cdots=\\log(a)\\frac{x+1-x}{x(x+1)}+\\cdots=\\frac{\\log(a)}{x(x+1)}+\\cdots$$ and so on.</p>\n\n<p>You should arrive to $$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\frac{\\log (a)}{x^2}+\\frac{\\log ^2(a)-\\log\n (a)}{x^3}+O\\left(\\frac{1}{x^4}\\right)$$ whcih will show the limit and how it is approached.</p>\n"
},
{
"answer_id": 2101425,
"score": 2,
"text": "<p>Since $\\lim_{x\\to\\infty}2017^{1/(x+1)}=1$, your limit is\n$$\n\\lim_{x\\to\\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)\n=\n\\lim_{x\\to\\infty}x^2(2017^{1/(x^2+x)}-1)\\\\\n$$\nNow do $t^{-1}=x^2+x$, so\n$$\nx^2=\\frac{t+2-\\sqrt{t^2+4t}}{2t}\n$$\nand you get\n$$\n\\lim_{t\\to0^+}\\frac{(2017^t-1)}{t}\\frac{t+2-\\sqrt{t^2+4t}}{2}=\\ln2017\n$$</p>\n"
},
{
"answer_id": 2101324,
"score": 1,
"text": "<p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href=\"https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D\" rel=\"nofollow noreferrer\">confirms</a>, \n$$\n2017^\\frac{1}{x} - 2017^\\frac{1}{x+1} = \\frac{\\log (2017)}{x^2} + o(1/x^2)\n$$\nas $x \\to +\\infty$, and therefore the limit equals $\\log (2017)$.</p>\n"
},
{
"answer_id": 2101353,
"score": 6,
"text": "<p>For finite $a>0,$</p>\n\n<p>$$\\lim_{x\\to\\infty}x^2(a^{1/x}-a^{1/(x+1)})=\\lim_{x\\to\\infty}a^{1/(x+1)}\\lim_{x\\to\\infty}x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)$$</p>\n\n<p>Now $\\displaystyle x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)=\\dfrac{a^{\\frac1{x(x+1)}}-1}{\\frac1{x(x+1)}}\\cdot\\dfrac1{1+\\frac1x}$</p>\n\n<p>Set $\\frac1{x(x+1)}=y$ to use $\\lim_{y\\to0}\\dfrac{a^y-1}y=\\ln a$</p>\n"
},
{
"answer_id": 2101329,
"score": 4,
"text": "<p>Let us consider $$A=x^2(a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}})$$ $$a^{\\frac{1}{x}}=e^{\\frac{\\log(a)}x}=1+\\frac{\\log (a)}{x}+\\frac{\\log ^2(a)}{2 x^2}+\\frac{\\log ^3(a)}{6\n x^3}+O\\left(\\frac{1}{x^4}\\right)$$ Do the same for the other term; subtract from eash other, use common denominator and so on. </p>\n\n<p>For the first term,\n$$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\log(a)\\left(\\frac{1}{x}-\\frac{1}{x+1} \\right)+\\cdots=\\log(a)\\frac{x+1-x}{x(x+1)}+\\cdots=\\frac{\\log(a)}{x(x+1)}+\\cdots$$ and so on.</p>\n\n<p>You should arrive to $$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\frac{\\log (a)}{x^2}+\\frac{\\log ^2(a)-\\log\n (a)}{x^3}+O\\left(\\frac{1}{x^4}\\right)$$ whcih will show the limit and how it is approached.</p>\n"
},
{
"answer_id": 2101425,
"score": 2,
"text": "<p>Since $\\lim_{x\\to\\infty}2017^{1/(x+1)}=1$, your limit is\n$$\n\\lim_{x\\to\\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)\n=\n\\lim_{x\\to\\infty}x^2(2017^{1/(x^2+x)}-1)\\\\\n$$\nNow do $t^{-1}=x^2+x$, so\n$$\nx^2=\\frac{t+2-\\sqrt{t^2+4t}}{2t}\n$$\nand you get\n$$\n\\lim_{t\\to0^+}\\frac{(2017^t-1)}{t}\\frac{t+2-\\sqrt{t^2+4t}}{2}=\\ln2017\n$$</p>\n"
},
{
"answer_id": 2101324,
"score": 1,
"text": "<p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href=\"https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D\" rel=\"nofollow noreferrer\">confirms</a>, \n$$\n2017^\\frac{1}{x} - 2017^\\frac{1}{x+1} = \\frac{\\log (2017)}{x^2} + o(1/x^2)\n$$\nas $x \\to +\\infty$, and therefore the limit equals $\\log (2017)$.</p>\n"
},
{
"answer_id": 2101353,
"score": 6,
"text": "<p>For finite $a>0,$</p>\n\n<p>$$\\lim_{x\\to\\infty}x^2(a^{1/x}-a^{1/(x+1)})=\\lim_{x\\to\\infty}a^{1/(x+1)}\\lim_{x\\to\\infty}x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)$$</p>\n\n<p>Now $\\displaystyle x^2\\left(a^{\\{1/x-1/(x+1)\\}}-1\\}\\right)=\\dfrac{a^{\\frac1{x(x+1)}}-1}{\\frac1{x(x+1)}}\\cdot\\dfrac1{1+\\frac1x}$</p>\n\n<p>Set $\\frac1{x(x+1)}=y$ to use $\\lim_{y\\to0}\\dfrac{a^y-1}y=\\ln a$</p>\n"
},
{
"answer_id": 2101329,
"score": 4,
"text": "<p>Let us consider $$A=x^2(a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}})$$ $$a^{\\frac{1}{x}}=e^{\\frac{\\log(a)}x}=1+\\frac{\\log (a)}{x}+\\frac{\\log ^2(a)}{2 x^2}+\\frac{\\log ^3(a)}{6\n x^3}+O\\left(\\frac{1}{x^4}\\right)$$ Do the same for the other term; subtract from eash other, use common denominator and so on. </p>\n\n<p>For the first term,\n$$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\log(a)\\left(\\frac{1}{x}-\\frac{1}{x+1} \\right)+\\cdots=\\log(a)\\frac{x+1-x}{x(x+1)}+\\cdots=\\frac{\\log(a)}{x(x+1)}+\\cdots$$ and so on.</p>\n\n<p>You should arrive to $$a^{\\frac{1}{x}} -a^{\\frac{1}{x+1}}=\\frac{\\log (a)}{x^2}+\\frac{\\log ^2(a)-\\log\n (a)}{x^3}+O\\left(\\frac{1}{x^4}\\right)$$ whcih will show the limit and how it is approached.</p>\n"
},
{
"answer_id": 2101425,
"score": 2,
"text": "<p>Since $\\lim_{x\\to\\infty}2017^{1/(x+1)}=1$, your limit is\n$$\n\\lim_{x\\to\\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)\n=\n\\lim_{x\\to\\infty}x^2(2017^{1/(x^2+x)}-1)\\\\\n$$\nNow do $t^{-1}=x^2+x$, so\n$$\nx^2=\\frac{t+2-\\sqrt{t^2+4t}}{2t}\n$$\nand you get\n$$\n\\lim_{t\\to0^+}\\frac{(2017^t-1)}{t}\\frac{t+2-\\sqrt{t^2+4t}}{2}=\\ln2017\n$$</p>\n"
},
{
"answer_id": 2101324,
"score": 1,
"text": "<p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href=\"https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D\" rel=\"nofollow noreferrer\">confirms</a>, \n$$\n2017^\\frac{1}{x} - 2017^\\frac{1}{x+1} = \\frac{\\log (2017)}{x^2} + o(1/x^2)\n$$\nas $x \\to +\\infty$, and therefore the limit equals $\\log (2017)$.</p>\n"
}
] | Question: Limit of a function with powers (L'Hopital doesn't work)
Answer 1: <p>For finite $a>0,$</p>
<p>$$\lim_{x\to\infty}x^2(a^{1/x}-a^{1/(x+1)})=\lim_{x\to\infty}a^{1/(x+1)}\lim_{x\to\infty}x^2\left(a^{1/x-1/(x+1)}-1\right)$$</p>
<p>Now $\displaystyle x^2\left(a^{1/x-1/(x+1)}-1\right)=\dfrac{a^{\frac1{x(x+1)}}-1}{\frac1{x(x+1)}}\cdot\dfrac1{1+\frac1x}$</p>
<p>Set $\frac1{x(x+1)}=y$ to use $\lim_{y\to0}\dfrac{a^y-1}y=\ln a$</p>
Answer 2: <p>Let us consider $$A=x^2(a^{\frac{1}{x}} -a^{\frac{1}{x+1}})$$ $$a^{\frac{1}{x}}=e^{\frac{\log(a)}x}=1+\frac{\log (a)}{x}+\frac{\log ^2(a)}{2 x^2}+\frac{\log ^3(a)}{6
 x^3}+O\left(\frac{1}{x^4}\right)$$ Do the same for the other term; subtract from each other, use a common denominator, and so on. </p>
<p>For the first term,
$$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\log(a)\left(\frac{1}{x}-\frac{1}{x+1} \right)+\cdots=\log(a)\frac{x+1-x}{x(x+1)}+\cdots=\frac{\log(a)}{x(x+1)}+\cdots$$ and so on.</p>
<p>You should arrive at $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\frac{\log (a)}{x^2}+\frac{\log ^2(a)-\log
 (a)}{x^3}+O\left(\frac{1}{x^4}\right)$$ which will show the limit and how it is approached.</p>
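A minimal SymPy sketch can confirm the first two coefficients of this expansion; the substitution $h=1/x$ is a choice of convenience here (not from the answer above), so the series is taken at $h=0$:

```python
import sympy as sp

# With h = 1/x one has 1/(x+1) = h/(1+h), so the difference becomes
# a**h - a**(h/(1+h)); its series at h = 0 should reproduce
# log(a)*h**2 + (log(a)**2 - log(a))*h**3 + O(h**4).
h, a = sp.symbols('h a', positive=True)
diff = a ** h - a ** (h / (1 + h))
print(sp.series(diff, h, 0, 4))
```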
Answer 3: <p>Since $\lim_{x\to\infty}2017^{1/(x+1)}=1$, your limit is
$$
\lim_{x\to\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)
=
\lim_{x\to\infty}x^2(2017^{1/(x^2+x)}-1)\\
$$
Now do $t^{-1}=x^2+x$, so
$$
x^2=\frac{t+2-\sqrt{t^2+4t}}{2t}
$$
and you get
$$
\lim_{t\to0^+}\frac{(2017^t-1)}{t}\frac{t+2-\sqrt{t^2+4t}}{2}=\ln2017
$$</p>
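The closed form for $x^2$ in terms of $t$ is easy to sanity-check numerically; a minimal Python sketch (the sample values of $x$ are arbitrary):

```python
import math

# With 1/t = x^2 + x, check that x^2 = (t + 2 - sqrt(t^2 + 4t)) / (2t).
for x in (3.0, 10.0, 100.0):
    t = 1.0 / (x * x + x)
    print(x * x, (t + 2 - math.sqrt(t * t + 4 * t)) / (2 * t))  # equal pairs
```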
Answer 4: <p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href="https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D" rel="nofollow noreferrer">confirms</a>,
$$
2017^\frac{1}{x} - 2017^\frac{1}{x+1} = \frac{\log (2017)}{x^2} + o(1/x^2)
$$
as $x \to +\infty$, and therefore the limit equals $\log (2017)$.</p>
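All of the approaches above predict the value $\ln 2017\approx 7.6094$, and a short Python sketch confirms this numerically; the stable factorization through expm1 mirrors the step of pulling out $a^{1/(x+1)}$, and the sample points are arbitrary:

```python
import math

# lim_{x->oo} x^2 (a^(1/x) - a^(1/(x+1))) for a = 2017, computed stably via
# a^(1/x) - a^(1/(x+1)) = a^(1/(x+1)) * (a^(1/(x(x+1))) - 1).
a = 2017.0
for x in (1e2, 1e3, 1e4, 1e5):
    diff = a ** (1 / (x + 1)) * math.expm1(math.log(a) / (x * (x + 1)))
    print(f"x = {x:8.0f}:  x^2 * diff = {x * x * diff:.6f}")
print(f"ln(2017)        = {math.log(a):.6f}")
```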
Answer 5: <p>For finite $a>0,$</p>
<p>$$\lim_{x\to\infty}x^2(a^{1/x}-a^{1/(x+1)})=\lim_{x\to\infty}a^{1/(x+1)}\lim_{x\to\infty}x^2\left(a^{1/x-1/(x+1)}-1\right)$$</p>
<p>Now $\displaystyle x^2\left(a^{1/x-1/(x+1)}-1\right)=\dfrac{a^{\frac1{x(x+1)}}-1}{\frac1{x(x+1)}}\cdot\dfrac1{1+\frac1x}$</p>
<p>Set $\frac1{x(x+1)}=y$ to use $\lim_{y\to0}\dfrac{a^y-1}y=\ln a$</p>
Answer 6: <p>Let us consider $$A=x^2(a^{\frac{1}{x}} -a^{\frac{1}{x+1}})$$ $$a^{\frac{1}{x}}=e^{\frac{\log(a)}x}=1+\frac{\log (a)}{x}+\frac{\log ^2(a)}{2 x^2}+\frac{\log ^3(a)}{6
 x^3}+O\left(\frac{1}{x^4}\right)$$ Do the same for the other term; subtract from each other, use a common denominator, and so on. </p>
<p>For the first term,
$$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\log(a)\left(\frac{1}{x}-\frac{1}{x+1} \right)+\cdots=\log(a)\frac{x+1-x}{x(x+1)}+\cdots=\frac{\log(a)}{x(x+1)}+\cdots$$ and so on.</p>
<p>You should arrive at $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\frac{\log (a)}{x^2}+\frac{\log ^2(a)-\log
 (a)}{x^3}+O\left(\frac{1}{x^4}\right)$$ which will show the limit and how it is approached.</p>
Answer 7: <p>Since $\lim_{x\to\infty}2017^{1/(x+1)}=1$, your limit is
$$
\lim_{x\to\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)
=
\lim_{x\to\infty}x^2(2017^{1/(x^2+x)}-1)\\
$$
Now do $t^{-1}=x^2+x$, so
$$
x^2=\frac{t+2-\sqrt{t^2+4t}}{2t}
$$
and you get
$$
\lim_{t\to0^+}\frac{(2017^t-1)}{t}\frac{t+2-\sqrt{t^2+4t}}{2}=\ln2017
$$</p>
Answer 8: <p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href="https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D" rel="nofollow noreferrer">confirms</a>,
$$
2017^\frac{1}{x} - 2017^\frac{1}{x+1} = \frac{\log (2017)}{x^2} + o(1/x^2)
$$
as $x \to +\infty$, and therefore the limit equals $\log (2017)$.</p>
Answer 9: <p>For finite $a>0,$</p>
<p>$$\lim_{x\to\infty}x^2(a^{1/x}-a^{1/(x+1)})=\lim_{x\to\infty}a^{1/(x+1)}\lim_{x\to\infty}x^2\left(a^{1/x-1/(x+1)}-1\right)$$</p>
<p>Now $\displaystyle x^2\left(a^{1/x-1/(x+1)}-1\right)=\dfrac{a^{\frac1{x(x+1)}}-1}{\frac1{x(x+1)}}\cdot\dfrac1{1+\frac1x}$</p>
<p>Set $\frac1{x(x+1)}=y$ to use $\lim_{y\to0}\dfrac{a^y-1}y=\ln a$</p>
Answer 10: <p>Let us consider $$A=x^2(a^{\frac{1}{x}} -a^{\frac{1}{x+1}})$$ $$a^{\frac{1}{x}}=e^{\frac{\log(a)}x}=1+\frac{\log (a)}{x}+\frac{\log ^2(a)}{2 x^2}+\frac{\log ^3(a)}{6
 x^3}+O\left(\frac{1}{x^4}\right)$$ Do the same for the other term; subtract from each other, use a common denominator, and so on. </p>
<p>For the first term,
$$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\log(a)\left(\frac{1}{x}-\frac{1}{x+1} \right)+\cdots=\log(a)\frac{x+1-x}{x(x+1)}+\cdots=\frac{\log(a)}{x(x+1)}+\cdots$$ and so on.</p>
<p>You should arrive at $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\frac{\log (a)}{x^2}+\frac{\log ^2(a)-\log
 (a)}{x^3}+O\left(\frac{1}{x^4}\right)$$ which will show the limit and how it is approached.</p>
Answer 11: <p>Since $\lim_{x\to\infty}2017^{1/(x+1)}=1$, your limit is
$$
\lim_{x\to\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)
=
\lim_{x\to\infty}x^2(2017^{1/(x^2+x)}-1)\\
$$
Now do $t^{-1}=x^2+x$, so
$$
x^2=\frac{t+2-\sqrt{t^2+4t}}{2t}
$$
and you get
$$
\lim_{t\to0^+}\frac{(2017^t-1)}{t}\frac{t+2-\sqrt{t^2+4t}}{2}=\ln2017
$$</p>
Answer 12: <p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href="https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D" rel="nofollow noreferrer">confirms</a>,
$$
2017^\frac{1}{x} - 2017^\frac{1}{x+1} = \frac{\log (2017)}{x^2} + o(1/x^2)
$$
as $x \to +\infty$, and therefore the limit equals $\log (2017)$.</p>
Answer 13: <p>For finite $a>0,$</p>
<p>$$\lim_{x\to\infty}x^2(a^{1/x}-a^{1/(x+1)})=\lim_{x\to\infty}a^{1/(x+1)}\lim_{x\to\infty}x^2\left(a^{1/x-1/(x+1)}-1\right)$$</p>
<p>Now $\displaystyle x^2\left(a^{1/x-1/(x+1)}-1\right)=\dfrac{a^{\frac1{x(x+1)}}-1}{\frac1{x(x+1)}}\cdot\dfrac1{1+\frac1x}$</p>
<p>Set $\frac1{x(x+1)}=y$ to use $\lim_{y\to0}\dfrac{a^y-1}y=\ln a$</p>
Answer 14: <p>Let us consider $$A=x^2(a^{\frac{1}{x}} -a^{\frac{1}{x+1}})$$ $$a^{\frac{1}{x}}=e^{\frac{\log(a)}x}=1+\frac{\log (a)}{x}+\frac{\log ^2(a)}{2 x^2}+\frac{\log ^3(a)}{6
 x^3}+O\left(\frac{1}{x^4}\right)$$ Do the same for the other term; subtract from each other, use a common denominator, and so on. </p>
<p>For the first term,
$$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\log(a)\left(\frac{1}{x}-\frac{1}{x+1} \right)+\cdots=\log(a)\frac{x+1-x}{x(x+1)}+\cdots=\frac{\log(a)}{x(x+1)}+\cdots$$ and so on.</p>
<p>You should arrive at $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\frac{\log (a)}{x^2}+\frac{\log ^2(a)-\log
 (a)}{x^3}+O\left(\frac{1}{x^4}\right)$$ which will show the limit and how it is approached.</p>
Answer 15: <p>Since $\lim_{x\to\infty}2017^{1/(x+1)}=1$, your limit is
$$
\lim_{x\to\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)
=
\lim_{x\to\infty}x^2(2017^{1/(x^2+x)}-1)\\
$$
Now do $t^{-1}=x^2+x$, so
$$
x^2=\frac{t+2-\sqrt{t^2+4t}}{2t}
$$
and you get
$$
\lim_{t\to0^+}\frac{(2017^t-1)}{t}\frac{t+2-\sqrt{t^2+4t}}{2}=\ln2017
$$</p>
Answer 16: <p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href="https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D" rel="nofollow noreferrer">confirms</a>,
$$
2017^\frac{1}{x} - 2017^\frac{1}{x+1} = \frac{\log (2017)}{x^2} + o(1/x^2)
$$
as $x \to +\infty$, and therefore the limit equals $\log (2017)$.</p>
Answer 17: <p>For finite $a>0,$</p>
<p>$$\lim_{x\to\infty}x^2(a^{1/x}-a^{1/(x+1)})=\lim_{x\to\infty}a^{1/(x+1)}\lim_{x\to\infty}x^2\left(a^{1/x-1/(x+1)}-1\right)$$</p>
<p>Now $\displaystyle x^2\left(a^{1/x-1/(x+1)}-1\right)=\dfrac{a^{\frac1{x(x+1)}}-1}{\frac1{x(x+1)}}\cdot\dfrac1{1+\frac1x}$</p>
<p>Set $\frac1{x(x+1)}=y$ to use $\lim_{y\to0}\dfrac{a^y-1}y=\ln a$</p>
Answer 18: <p>Let us consider $$A=x^2(a^{\frac{1}{x}} -a^{\frac{1}{x+1}})$$ $$a^{\frac{1}{x}}=e^{\frac{\log(a)}x}=1+\frac{\log (a)}{x}+\frac{\log ^2(a)}{2 x^2}+\frac{\log ^3(a)}{6
 x^3}+O\left(\frac{1}{x^4}\right)$$ Do the same for the other term; subtract from each other, use a common denominator, and so on. </p>
<p>For the first term,
$$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\log(a)\left(\frac{1}{x}-\frac{1}{x+1} \right)+\cdots=\log(a)\frac{x+1-x}{x(x+1)}+\cdots=\frac{\log(a)}{x(x+1)}+\cdots$$ and so on.</p>
<p>You should arrive at $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\frac{\log (a)}{x^2}+\frac{\log ^2(a)-\log
 (a)}{x^3}+O\left(\frac{1}{x^4}\right)$$ which will show the limit and how it is approached.</p>
Answer 19: <p>Since $\lim_{x\to\infty}2017^{1/(x+1)}=1$, your limit is
$$
\lim_{x\to\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)
=
\lim_{x\to\infty}x^2(2017^{1/(x^2+x)}-1)\\
$$
Now do $t^{-1}=x^2+x$, so
$$
x^2=\frac{t+2-\sqrt{t^2+4t}}{2t}
$$
and you get
$$
\lim_{t\to0^+}\frac{(2017^t-1)}{t}\frac{t+2-\sqrt{t^2+4t}}{2}=\ln2017
$$</p>
Answer 20: <p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href="https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D" rel="nofollow noreferrer">confirms</a>,
$$
2017^\frac{1}{x} - 2017^\frac{1}{x+1} = \frac{\log (2017)}{x^2} + o(1/x^2)
$$
as $x \to +\infty$, and therefore the limit equals $\log (2017)$.</p>
Answer 21: <p>For finite $a>0,$</p>
<p>$$\lim_{x\to\infty}x^2(a^{1/x}-a^{1/(x+1)})=\lim_{x\to\infty}a^{1/(x+1)}\lim_{x\to\infty}x^2\left(a^{1/x-1/(x+1)}-1\right)$$</p>
<p>Now $\displaystyle x^2\left(a^{1/x-1/(x+1)}-1\right)=\dfrac{a^{\frac1{x(x+1)}}-1}{\frac1{x(x+1)}}\cdot\dfrac1{1+\frac1x}$</p>
<p>Set $\frac1{x(x+1)}=y$ to use $\lim_{y\to0}\dfrac{a^y-1}y=\ln a$</p>
Answer 22: <p>Let us consider $$A=x^2(a^{\frac{1}{x}} -a^{\frac{1}{x+1}})$$ $$a^{\frac{1}{x}}=e^{\frac{\log(a)}x}=1+\frac{\log (a)}{x}+\frac{\log ^2(a)}{2 x^2}+\frac{\log ^3(a)}{6
 x^3}+O\left(\frac{1}{x^4}\right)$$ Do the same for the other term; subtract from each other, use a common denominator, and so on. </p>
<p>For the first term,
$$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\log(a)\left(\frac{1}{x}-\frac{1}{x+1} \right)+\cdots=\log(a)\frac{x+1-x}{x(x+1)}+\cdots=\frac{\log(a)}{x(x+1)}+\cdots$$ and so on.</p>
<p>You should arrive at $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\frac{\log (a)}{x^2}+\frac{\log ^2(a)-\log
 (a)}{x^3}+O\left(\frac{1}{x^4}\right)$$ which will show the limit and how it is approached.</p>
Answer 23: <p>Since $\lim_{x\to\infty}2017^{1/(x+1)}=1$, your limit is
$$
\lim_{x\to\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)
=
\lim_{x\to\infty}x^2(2017^{1/(x^2+x)}-1)\\
$$
Now do $t^{-1}=x^2+x$, so
$$
x^2=\frac{t+2-\sqrt{t^2+4t}}{2t}
$$
and you get
$$
\lim_{t\to0^+}\frac{(2017^t-1)}{t}\frac{t+2-\sqrt{t^2+4t}}{2}=\ln2017
$$</p>
Answer 24: <p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href="https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D" rel="nofollow noreferrer">confirms</a>,
$$
2017^\frac{1}{x} - 2017^\frac{1}{x+1} = \frac{\log (2017)}{x^2} + o(1/x^2)
$$
as $x \to +\infty$, and therefore the limit equals $\log (2017)$.</p>
Answer 25: <p>For finite $a>0,$</p>
<p>$$\lim_{x\to\infty}x^2(a^{1/x}-a^{1/(x+1)})=\lim_{x\to\infty}a^{1/(x+1)}\lim_{x\to\infty}x^2\left(a^{1/x-1/(x+1)}-1\right)$$</p>
<p>Now $\displaystyle x^2\left(a^{1/x-1/(x+1)}-1\right)=\dfrac{a^{\frac1{x(x+1)}}-1}{\frac1{x(x+1)}}\cdot\dfrac1{1+\frac1x}$</p>
<p>Set $\frac1{x(x+1)}=y$ to use $\lim_{y\to0}\dfrac{a^y-1}y=\ln a$</p>
Answer 26: <p>Let us consider $$A=x^2(a^{\frac{1}{x}} -a^{\frac{1}{x+1}})$$ $$a^{\frac{1}{x}}=e^{\frac{\log(a)}x}=1+\frac{\log (a)}{x}+\frac{\log ^2(a)}{2 x^2}+\frac{\log ^3(a)}{6
 x^3}+O\left(\frac{1}{x^4}\right)$$ Do the same for the other term; subtract from each other, use a common denominator, and so on. </p>
<p>For the first term,
$$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\log(a)\left(\frac{1}{x}-\frac{1}{x+1} \right)+\cdots=\log(a)\frac{x+1-x}{x(x+1)}+\cdots=\frac{\log(a)}{x(x+1)}+\cdots$$ and so on.</p>
<p>You should arrive at $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\frac{\log (a)}{x^2}+\frac{\log ^2(a)-\log
 (a)}{x^3}+O\left(\frac{1}{x^4}\right)$$ which will show the limit and how it is approached.</p>
Answer 27: <p>Since $\lim_{x\to\infty}2017^{1/(x+1)}=1$, your limit is
$$
\lim_{x\to\infty}2017^{1/(x+1)}x^2(2017^{1/(x^2+x)}-1)
=
\lim_{x\to\infty}x^2(2017^{1/(x^2+x)}-1)\\
$$
Now do $t^{-1}=x^2+x$, so
$$
x^2=\frac{t+2-\sqrt{t^2+4t}}{2t}
$$
and you get
$$
\lim_{t\to0^+}\frac{(2017^t-1)}{t}\frac{t+2-\sqrt{t^2+4t}}{2}=\ln2017
$$</p>
Answer 28: <p>The best idea is probably to expand in power series (if you are allowed to do that). As Wolfram Alpha <a href="https://www.wolframalpha.com/input/?i=Series%5B2017%5E(1%2Fx)-2017%5E(1%2F(x-1)),%7Bx,%2BInfinity,5%7D%5D" rel="nofollow noreferrer">confirms</a>,
$$
2017^\frac{1}{x} - 2017^\frac{1}{x+1} = \frac{\log (2017)}{x^2} + o(1/x^2)
$$
as $x \to +\infty$, and therefore the limit equals $\log (2017)$.</p>
|
Limit $\lim_{x \to 0} \frac{\sin[x]}{[x]}$ | 2,101,344 | 5 | https://math.stackexchange.com/questions/2101344/limit-lim-x-to-0-frac-sinxx | <p>I am trying to find the limit of $$\lim_{x \to 0} \frac{\sin[x]}{[x]}$$ where [.] represents the greatest integer function.
I tried taking an infinitesimally small positive number $h$ and computing the right-hand and left-hand limits.</p>
<p>$$\lim_{x \to 0^+}\frac{\sin[x]}{[x]}$$
$$\Rightarrow \lim_{h \to 0} \frac{\sin[h]}{[h]}$$</p>
<p>I am stuck over here, though I know that $$\lim_{x \to 0}\frac{\sin x}{x}=1$$ But here I see that since $h$ is a very small positive number, $[h]$ itself becomes zero and we get<br>
$$\Rightarrow \frac{\sin 0}{0}.$$ Does this show that the RHL doesn't exist, or am I at fault somewhere? </p>
| [
{
"answer_id": 2101350,
"score": 5,
"text": "<p>For $-1 \\le x<0$, you have $[x]=-1$, so for $x$ in that interval:\n$$\\frac{\\sin[x]}{[x]}=\\frac{\\sin(-1)}{-1}=\\sin 1$$\nWhich makes the left-handed limit $\\sin 1$.</p>\n\n<p>For $0 \\le x<1$, you have $[x]=0$ so $\\tfrac{\\sin x}{x}$ is not defined in neighborhoods on the right of $x=0$.</p>\n"
},
{
"answer_id": 2101352,
"score": 5,
"text": "<p>The domain of the function excludes $[0,1)$, so that</p>\n\n<p>$$\\lim_{x\\to0}f(x)=\\lim_{x\\to0^-}f(x)=-\\sin(-1).$$</p>\n"
},
{
"answer_id": 2101350,
"score": 5,
"text": "<p>For $-1 \\le x<0$, you have $[x]=-1$, so for $x$ in that interval:\n$$\\frac{\\sin[x]}{[x]}=\\frac{\\sin(-1)}{-1}=\\sin 1$$\nWhich makes the left-handed limit $\\sin 1$.</p>\n\n<p>For $0 \\le x<1$, you have $[x]=0$ so $\\tfrac{\\sin x}{x}$ is not defined in neighborhoods on the right of $x=0$.</p>\n"
},
{
"answer_id": 2101352,
"score": 5,
"text": "<p>The domain of the function excludes $[0,1)$, so that</p>\n\n<p>$$\\lim_{x\\to0}f(x)=\\lim_{x\\to0^-}f(x)=-\\sin(-1).$$</p>\n"
},
{
"answer_id": 2101350,
"score": 5,
"text": "<p>For $-1 \\le x<0$, you have $[x]=-1$, so for $x$ in that interval:\n$$\\frac{\\sin[x]}{[x]}=\\frac{\\sin(-1)}{-1}=\\sin 1$$\nWhich makes the left-handed limit $\\sin 1$.</p>\n\n<p>For $0 \\le x<1$, you have $[x]=0$ so $\\tfrac{\\sin x}{x}$ is not defined in neighborhoods on the right of $x=0$.</p>\n"
},
{
"answer_id": 2101352,
"score": 5,
"text": "<p>The domain of the function excludes $[0,1)$, so that</p>\n\n<p>$$\\lim_{x\\to0}f(x)=\\lim_{x\\to0^-}f(x)=-\\sin(-1).$$</p>\n"
},
{
"answer_id": 2101350,
"score": 5,
"text": "<p>For $-1 \\le x<0$, you have $[x]=-1$, so for $x$ in that interval:\n$$\\frac{\\sin[x]}{[x]}=\\frac{\\sin(-1)}{-1}=\\sin 1$$\nWhich makes the left-handed limit $\\sin 1$.</p>\n\n<p>For $0 \\le x<1$, you have $[x]=0$ so $\\tfrac{\\sin x}{x}$ is not defined in neighborhoods on the right of $x=0$.</p>\n"
},
{
"answer_id": 2101352,
"score": 5,
"text": "<p>The domain of the function excludes $[0,1)$, so that</p>\n\n<p>$$\\lim_{x\\to0}f(x)=\\lim_{x\\to0^-}f(x)=-\\sin(-1).$$</p>\n"
},
{
"answer_id": 2101350,
"score": 5,
"text": "<p>For $-1 \\le x<0$, you have $[x]=-1$, so for $x$ in that interval:\n$$\\frac{\\sin[x]}{[x]}=\\frac{\\sin(-1)}{-1}=\\sin 1$$\nWhich makes the left-handed limit $\\sin 1$.</p>\n\n<p>For $0 \\le x<1$, you have $[x]=0$ so $\\tfrac{\\sin x}{x}$ is not defined in neighborhoods on the right of $x=0$.</p>\n"
},
{
"answer_id": 2101352,
"score": 5,
"text": "<p>The domain of the function excludes $[0,1)$, so that</p>\n\n<p>$$\\lim_{x\\to0}f(x)=\\lim_{x\\to0^-}f(x)=-\\sin(-1).$$</p>\n"
},
{
"answer_id": 2101350,
"score": 5,
"text": "<p>For $-1 \\le x<0$, you have $[x]=-1$, so for $x$ in that interval:\n$$\\frac{\\sin[x]}{[x]}=\\frac{\\sin(-1)}{-1}=\\sin 1$$\nWhich makes the left-handed limit $\\sin 1$.</p>\n\n<p>For $0 \\le x<1$, you have $[x]=0$ so $\\tfrac{\\sin x}{x}$ is not defined in neighborhoods on the right of $x=0$.</p>\n"
},
{
"answer_id": 2101352,
"score": 5,
"text": "<p>The domain of the function excludes $[0,1)$, so that</p>\n\n<p>$$\\lim_{x\\to0}f(x)=\\lim_{x\\to0^-}f(x)=-\\sin(-1).$$</p>\n"
},
{
"answer_id": 2101350,
"score": 5,
"text": "<p>For $-1 \\le x<0$, you have $[x]=-1$, so for $x$ in that interval:\n$$\\frac{\\sin[x]}{[x]}=\\frac{\\sin(-1)}{-1}=\\sin 1$$\nWhich makes the left-handed limit $\\sin 1$.</p>\n\n<p>For $0 \\le x<1$, you have $[x]=0$ so $\\tfrac{\\sin x}{x}$ is not defined in neighborhoods on the right of $x=0$.</p>\n"
},
{
"answer_id": 2101352,
"score": 5,
"text": "<p>The domain of the function excludes $[0,1)$, so that</p>\n\n<p>$$\\lim_{x\\to0}f(x)=\\lim_{x\\to0^-}f(x)=-\\sin(-1).$$</p>\n"
}
] | Question: Limit $\lim_{x \to 0} \frac{\sin[x]}{[x]}$
Answer 1: <p>For $-1 \le x<0$, you have $[x]=-1$, so for $x$ in that interval:
$$\frac{\sin[x]}{[x]}=\frac{\sin(-1)}{-1}=\sin 1$$
which makes the left-hand limit $\sin 1$.</p>
<p>For $0 \le x<1$, you have $[x]=0$, so $\tfrac{\sin[x]}{[x]}$ is not defined in neighborhoods on the right of $x=0$.</p>
Answer 2: <p>The domain of the function excludes $[0,1)$, so that</p>
<p>$$\lim_{x\to0}f(x)=\lim_{x\to0^-}f(x)=-\sin(-1).$$</p>
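Both answers are easy to illustrate numerically; a minimal Python sketch, with math.floor standing in for $[\,\cdot\,]$:

```python
import math

# On [-1, 0) the floor is -1, so sin[x]/[x] is constantly sin(1) there;
# on [0, 1) the floor is 0 and the quotient sin(0)/0 is undefined.
for x in (-0.5, -0.1, -1e-6):
    fx = math.floor(x)
    print(x, math.sin(fx) / fx)   # each line prints sin(1) = 0.841470...
print(math.floor(1e-6))           # 0, so the function is undefined here
```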
Answer 3: <p>For $-1 \le x<0$, you have $[x]=-1$, so for $x$ in that interval:
$$\frac{\sin[x]}{[x]}=\frac{\sin(-1)}{-1}=\sin 1$$
which makes the left-hand limit $\sin 1$.</p>
<p>For $0 \le x<1$, you have $[x]=0$, so $\tfrac{\sin[x]}{[x]}$ is not defined in neighborhoods on the right of $x=0$.</p>
Answer 4: <p>The domain of the function excludes $[0,1)$, so that</p>
<p>$$\lim_{x\to0}f(x)=\lim_{x\to0^-}f(x)=-\sin(-1).$$</p>
Answer 5: <p>For $-1 \le x<0$, you have $[x]=-1$, so for $x$ in that interval:
$$\frac{\sin[x]}{[x]}=\frac{\sin(-1)}{-1}=\sin 1$$
which makes the left-hand limit $\sin 1$.</p>
<p>For $0 \le x<1$, you have $[x]=0$, so $\tfrac{\sin[x]}{[x]}$ is not defined in neighborhoods on the right of $x=0$.</p>
Answer 6: <p>The domain of the function excludes $[0,1)$, so that</p>
<p>$$\lim_{x\to0}f(x)=\lim_{x\to0^-}f(x)=-\sin(-1).$$</p>
Answer 7: <p>For $-1 \le x<0$, you have $[x]=-1$, so for $x$ in that interval:
$$\frac{\sin[x]}{[x]}=\frac{\sin(-1)}{-1}=\sin 1$$
which makes the left-hand limit $\sin 1$.</p>
<p>For $0 \le x<1$, you have $[x]=0$, so $\tfrac{\sin[x]}{[x]}$ is not defined in neighborhoods on the right of $x=0$.</p>
Answer 8: <p>The domain of the function excludes $[0,1)$, so that</p>
<p>$$\lim_{x\to0}f(x)=\lim_{x\to0^-}f(x)=-\sin(-1).$$</p>
Answer 9: <p>For $-1 \le x<0$, you have $[x]=-1$, so for $x$ in that interval:
$$\frac{\sin[x]}{[x]}=\frac{\sin(-1)}{-1}=\sin 1$$
which makes the left-hand limit $\sin 1$.</p>
<p>For $0 \le x<1$, you have $[x]=0$, so $\tfrac{\sin[x]}{[x]}$ is not defined in neighborhoods on the right of $x=0$.</p>
Answer 10: <p>The domain of the function excludes $[0,1)$, so that</p>
<p>$$\lim_{x\to0}f(x)=\lim_{x\to0^-}f(x)=-\sin(-1).$$</p>
Answer 11: <p>For $-1 \le x<0$, you have $[x]=-1$, so for $x$ in that interval:
$$\frac{\sin[x]}{[x]}=\frac{\sin(-1)}{-1}=\sin 1$$
which makes the left-hand limit $\sin 1$.</p>
<p>For $0 \le x<1$, you have $[x]=0$, so $\tfrac{\sin[x]}{[x]}$ is not defined in neighborhoods on the right of $x=0$.</p>
Answer 12: <p>The domain of the function excludes $[0,1)$, so that</p>
<p>$$\lim_{x\to0}f(x)=\lim_{x\to0^-}f(x)=-\sin(-1).$$</p>
Answer 13: <p>For $-1 \le x<0$, you have $[x]=-1$, so for $x$ in that interval:
$$\frac{\sin[x]}{[x]}=\frac{\sin(-1)}{-1}=\sin 1$$
which makes the left-hand limit $\sin 1$.</p>
<p>For $0 \le x<1$, you have $[x]=0$, so $\tfrac{\sin[x]}{[x]}$ is not defined in neighborhoods on the right of $x=0$.</p>
Answer 14: <p>The domain of the function excludes $[0,1)$, so that</p>
<p>$$\lim_{x\to0}f(x)=\lim_{x\to0^-}f(x)=-\sin(-1).$$</p>
|
Is there a relation between the connected components of a Lie group? | 2,101,346 | 5 | https://math.stackexchange.com/questions/2101346/is-there-a-relation-between-the-connected-components-of-a-lie-group | <p>Let $G_0$ be the identity component of a (compact) Lie group $G$ and $G_1$ one of its connected components. Is there any relation between $G_0$ and $G_1$? For example, is there a $g\in{G_1}$ such that $L_g\left(G_0\right)=G_1$? Here $L_g$ is the left-translation-by-$g$ map. Also, let's consider the map $\varphi\equiv{L_g}\circ\exp\colon\mathfrak{g}\to{G_1}$. Is this map injective or surjective?</p>
| [
{
"answer_id": 2101357,
"score": 6,
"text": "<p>All the connected components are diffeomorphic to $G_0$. Choose some $g \\in G_1$. Then $L_g : G \\to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \\in G_1$, you get $L_g(G_0) \\subset G_1$. Similarly $L_{g^{-1}}(G_1) \\subset G_0$. The two maps $L_g : G_0 \\to G_1$ and $L_{g^{-1}} : G_1 \\to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \\cong G_1$.</p>\n"
},
{
"answer_id": 2101361,
"score": 2,
"text": "<p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book \"Lie groups beyond an introduction\"). So $\\exp:\\mathfrak{g}\\to G_0$ is surjective and hence $L_g\\circ\\exp:\\mathfrak{g}\\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>\n"
},
{
"answer_id": 2101357,
"score": 6,
"text": "<p>All the connected components are diffeomorphic to $G_0$. Choose some $g \\in G_1$. Then $L_g : G \\to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \\in G_1$, you get $L_g(G_0) \\subset G_1$. Similarly $L_{g^{-1}}(G_1) \\subset G_0$. The two maps $L_g : G_0 \\to G_1$ and $L_{g^{-1}} : G_1 \\to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \\cong G_1$.</p>\n"
},
{
"answer_id": 2101361,
"score": 2,
"text": "<p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book \"Lie groups beyond an introduction\"). So $\\exp:\\mathfrak{g}\\to G_0$ is surjective and hence $L_g\\circ\\exp:\\mathfrak{g}\\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>\n"
},
{
"answer_id": 2101357,
"score": 6,
"text": "<p>All the connected components are diffeomorphic to $G_0$. Choose some $g \\in G_1$. Then $L_g : G \\to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \\in G_1$, you get $L_g(G_0) \\subset G_1$. Similarly $L_{g^{-1}}(G_1) \\subset G_0$. The two maps $L_g : G_0 \\to G_1$ and $L_{g^{-1}} : G_1 \\to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \\cong G_1$.</p>\n"
},
{
"answer_id": 2101361,
"score": 2,
"text": "<p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book \"Lie groups beyond an introduction\"). So $\\exp:\\mathfrak{g}\\to G_0$ is surjective and hence $L_g\\circ\\exp:\\mathfrak{g}\\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>\n"
},
{
"answer_id": 2101357,
"score": 6,
"text": "<p>All the connected components are diffeomorphic to $G_0$. Choose some $g \\in G_1$. Then $L_g : G \\to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \\in G_1$, you get $L_g(G_0) \\subset G_1$. Similarly $L_{g^{-1}}(G_1) \\subset G_0$. The two maps $L_g : G_0 \\to G_1$ and $L_{g^{-1}} : G_1 \\to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \\cong G_1$.</p>\n"
},
{
"answer_id": 2101361,
"score": 2,
"text": "<p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book \"Lie groups beyond an introduction\"). So $\\exp:\\mathfrak{g}\\to G_0$ is surjective and hence $L_g\\circ\\exp:\\mathfrak{g}\\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>\n"
},
{
"answer_id": 2101357,
"score": 6,
"text": "<p>All the connected components are diffeomorphic to $G_0$. Choose some $g \\in G_1$. Then $L_g : G \\to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \\in G_1$, you get $L_g(G_0) \\subset G_1$. Similarly $L_{g^{-1}}(G_1) \\subset G_0$. The two maps $L_g : G_0 \\to G_1$ and $L_{g^{-1}} : G_1 \\to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \\cong G_1$.</p>\n"
},
{
"answer_id": 2101361,
"score": 2,
"text": "<p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book \"Lie groups beyond an introduction\"). So $\\exp:\\mathfrak{g}\\to G_0$ is surjective and hence $L_g\\circ\\exp:\\mathfrak{g}\\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>\n"
},
{
"answer_id": 2101357,
"score": 6,
"text": "<p>All the connected components are diffeomorphic to $G_0$. Choose some $g \\in G_1$. Then $L_g : G \\to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \\in G_1$, you get $L_g(G_0) \\subset G_1$. Similarly $L_{g^{-1}}(G_1) \\subset G_0$. The two maps $L_g : G_0 \\to G_1$ and $L_{g^{-1}} : G_1 \\to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \\cong G_1$.</p>\n"
},
{
"answer_id": 2101361,
"score": 2,
"text": "<p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book \"Lie groups beyond an introduction\"). So $\\exp:\\mathfrak{g}\\to G_0$ is surjective and hence $L_g\\circ\\exp:\\mathfrak{g}\\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>\n"
},
{
"answer_id": 2101357,
"score": 6,
"text": "<p>All the connected components are diffeomorphic to $G_0$. Choose some $g \\in G_1$. Then $L_g : G \\to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \\in G_1$, you get $L_g(G_0) \\subset G_1$. Similarly $L_{g^{-1}}(G_1) \\subset G_0$. The two maps $L_g : G_0 \\to G_1$ and $L_{g^{-1}} : G_1 \\to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \\cong G_1$.</p>\n"
},
{
"answer_id": 2101361,
"score": 2,
"text": "<p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book \"Lie groups beyond an introduction\"). So $\\exp:\\mathfrak{g}\\to G_0$ is surjective and hence $L_g\\circ\\exp:\\mathfrak{g}\\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>\n"
}
] | Question: Is there a relation between the connected components of a Lie group?
Answer 1: <p>All the connected components are diffeomorphic to $G_0$. Choose some $g \in G_1$. Then $L_g : G \to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \in G_1$, you get $L_g(G_0) \subset G_1$. Similarly $L_{g^{-1}}(G_1) \subset G_0$. The two maps $L_g : G_0 \to G_1$ and $L_{g^{-1}} : G_1 \to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \cong G_1$.</p>
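A concrete illustration of this left-translation argument, taking $G=O(2)$ as the example group (my choice, not from the answer): left translation by any reflection $g$ carries the identity component $SO(2)$ (determinant $+1$) into the other component (determinant $-1$). A minimal NumPy sketch:

```python
import numpy as np

def rotation(t):
    # A generic element of the identity component G0 = SO(2)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

g = np.array([[1.0, 0.0],
              [0.0, -1.0]])   # a reflection, i.e. a point of G1
for t in (0.0, 0.7, 2.0):
    print(np.linalg.det(g @ rotation(t)))   # always -1.0: L_g maps G0 into G1
```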
Answer 2: <p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book "Lie groups beyond an introduction"). So $\exp:\mathfrak{g}\to G_0$ is surjective and hence $L_g\circ\exp:\mathfrak{g}\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>
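The failure of injectivity at the end is already visible for $G=SO(2)\cong S^1$; a minimal sketch, assuming SciPy is available for the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # basis vector of the Lie algebra so(2)
t = 0.7
print(expm(t * J))                   # rotation by angle t
print(expm((t + 2 * np.pi) * J))     # the same rotation: exp is not injective
```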
Answer 3: <p>All the connected components are diffeomorphic to $G_0$. Choose some $g \in G_1$. Then $L_g : G \to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \in G_1$, you get $L_g(G_0) \subset G_1$. Similarly $L_{g^{-1}}(G_1) \subset G_0$. The two maps $L_g : G_0 \to G_1$ and $L_{g^{-1}} : G_1 \to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \cong G_1$.</p>
Answer 4: <p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book "Lie groups beyond an introduction"). So $\exp:\mathfrak{g}\to G_0$ is surjective and hence $L_g\circ\exp:\mathfrak{g}\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>
Answer 5: <p>All the connected components are diffeomorphic to $G_0$. Choose some $g \in G_1$. Then $L_g : G \to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \in G_1$, you get $L_g(G_0) \subset G_1$. Similarly $L_{g^{-1}}(G_1) \subset G_0$. The two maps $L_g : G_0 \to G_1$ and $L_{g^{-1}} : G_1 \to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \cong G_1$.</p>
Answer 6: <p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book "Lie groups beyond an introduction"). So $\exp:\mathfrak{g}\to G_0$ is surjective and hence $L_g\circ\exp:\mathfrak{g}\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>
Answer 7: <p>All the connected components are diffeomorphic to $G_0$. Choose some $g \in G_1$. Then $L_g : G \to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \in G_1$, you get $L_g(G_0) \subset G_1$. Similarly $L_{g^{-1}}(G_1) \subset G_0$. The two maps $L_g : G_0 \to G_1$ and $L_{g^{-1}} : G_1 \to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \cong G_1$.</p>
Answer 8: <p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book "Lie groups beyond an introduction"). So $\exp:\mathfrak{g}\to G_0$ is surjective and hence $L_g\circ\exp:\mathfrak{g}\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>
Answer 9: <p>All the connected components are diffeomorphic to $G_0$. Choose some $g \in G_1$. Then $L_g : G \to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \in G_1$, you get $L_g(G_0) \subset G_1$. Similarly $L_{g^{-1}}(G_1) \subset G_0$. The two maps $L_g : G_0 \to G_1$ and $L_{g^{-1}} : G_1 \to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \cong G_1$.</p>
Answer 10: <p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book "Lie groups beyond an introduction"). So $\exp:\mathfrak{g}\to G_0$ is surjective and hence $L_g\circ\exp:\mathfrak{g}\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>
Answer 11: <p>All the connected components are diffeomorphic to $G_0$. Choose some $g \in G_1$. Then $L_g : G \to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \in G_1$, you get $L_g(G_0) \subset G_1$. Similarly $L_{g^{-1}}(G_1) \subset G_0$. The two maps $L_g : G_0 \to G_1$ and $L_{g^{-1}} : G_1 \to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \cong G_1$.</p>
Answer 12: <p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book "Lie groups beyond an introduction"). So $\exp:\mathfrak{g}\to G_0$ is surjective and hence $L_g\circ\exp:\mathfrak{g}\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>
Answer 13: <p>All the connected components are diffeomorphic to $G_0$. Choose some $g \in G_1$. Then $L_g : G \to G$ is continuous, therefore $L_g(G_0)$ is connected; since it contains $g = L_g(e) \in G_1$, you get $L_g(G_0) \subset G_1$. Similarly $L_{g^{-1}}(G_1) \subset G_0$. The two maps $L_g : G_0 \to G_1$ and $L_{g^{-1}} : G_1 \to G_0$ are in fact inverse to each other, and they're both differentiable, so you get a diffeomorphism $G_0 \cong G_1$.</p>
Answer 14: <p>For the second question: if $G$ is compact, then so is $G_0$ and hence also $G_1$ (using that $G_0\cong G_1$ as Najib Idrissi showed). The exponential map of a compact connected Lie group is surjective (but that is difficult to show, see for example Knapp's book "Lie groups beyond an introduction"). So $\exp:\mathfrak{g}\to G_0$ is surjective and hence $L_g\circ\exp:\mathfrak{g}\to G_1$ is surjective. But it is not injective in general. (Take for example $G=S^1$).</p>
|
Formula for midpoint in hyperbolic 3-space | 2,101,606 | 5 | https://math.stackexchange.com/questions/2101606/formula-for-midpoint-in-hyperbolic-3-space | <p>Consider the hyperbolic 3-space $(\mathbb{H}^3,ds^2)$ with</p>
<p>$$\mathbb{H}^3:=\{(x,y,z)\in\mathbb{R}^3|z>0\}, \quad ds^2=\frac{dx^2+dy^2+dz^2}{z^2}$$</p>
<p>Geodesics for this space are circular arcs normal to $\{z=0\}$ and vertical rays normal to $\{z=0\}$.</p>
<p>Given any two points, for example $p_1=(2,-1,3)$, $p_2=(1,2,4)$ (I've chosen them so they don't lie on a vertical ray), <strong>is it possible to obtain a closed formula for the coordinates of the midpoint $m$ of $p_1$ and $p_2$?</strong> (i.e. the point $m$ is such that $d(m,p_1)=d(m,p_2)=d(p_1,p_2)/2$, where $d$ is the metric induced by $ds^2$)</p>
<p>I can only think of this method: given $p_1$ and $p_2$, find the plane $H$ orthogonal to $\{z=0\}$ which contains both $p_1$ and $p_2$. Then in $H$ find the circular arc $g:[0,d(p_1,p_2)]\rightarrow H$ orthogonal to $\{z=0\}$ with $g(0)=p_1$, $g(d(p_1,p_2))=p_2$ and $|\dot g(t)|_{ds^2}=1$. Finally $m=g(d(p_1,p_2)/2)$.</p>
<p>But my method is rather long and complicated. Do you know of a better one which could be written directly as a formula? I'm interested in the case of $(\mathbb{H}^3,ds^2)$ to find a method which could be generalized to other Riemannian and metric spaces.</p>
| [
{
"answer_id": 2138748,
"score": 4,
"text": "<p>A clean way to do this is to work with the <a href=\"https://en.wikipedia.org/wiki/Hyperboloid_model\" rel=\"nofollow noreferrer\">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\\subset R^{1,n}$, their midpoint is \n$$\\frac{p+q}{\\sqrt{<p+q, p+q>}}.$$\n If you know <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model\" rel=\"nofollow noreferrer\">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href=\"https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1\">this question</a>. </p>\n"
},
{
"answer_id": 2102020,
"score": 1,
"text": "<p><strong>Hint</strong> </p>\n\n<ul>\n<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>\n$$(x-a)^2+(y-b)^2+z^2=r^2$$ \nThe vectors $u=pp_1$ and $v=pp_2$ satisfy\n$$(u\\times v).k=0$$\nUsing these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>\n<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>\n</ul>\n"
},
{
"answer_id": 2138748,
"score": 4,
"text": "<p>A clean way to do this is to work with the <a href=\"https://en.wikipedia.org/wiki/Hyperboloid_model\" rel=\"nofollow noreferrer\">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\\subset R^{1,n}$, their midpoint is \n$$\\frac{p+q}{\\sqrt{<p+q, p+q>}}.$$\n If you know <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model\" rel=\"nofollow noreferrer\">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href=\"https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1\">this question</a>. </p>\n"
},
{
"answer_id": 2102020,
"score": 1,
"text": "<p><strong>Hint</strong> </p>\n\n<ul>\n<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>\n$$(x-a)^2+(y-b)^2+z^2=r^2$$ \nThe vectors $u=pp_1$ and $v=pp_2$ satisfy\n$$(u\\times v).k=0$$\nUsing these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>\n<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>\n</ul>\n"
},
{
"answer_id": 2138748,
"score": 4,
"text": "<p>A clean way to do this is to work with the <a href=\"https://en.wikipedia.org/wiki/Hyperboloid_model\" rel=\"nofollow noreferrer\">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\\subset R^{1,n}$, their midpoint is \n$$\\frac{p+q}{\\sqrt{<p+q, p+q>}}.$$\n If you know <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model\" rel=\"nofollow noreferrer\">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href=\"https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1\">this question</a>. </p>\n"
},
{
"answer_id": 2102020,
"score": 1,
"text": "<p><strong>Hint</strong> </p>\n\n<ul>\n<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>\n$$(x-a)^2+(y-b)^2+z^2=r^2$$ \nThe vectors $u=pp_1$ and $v=pp_2$ satisfy\n$$(u\\times v).k=0$$\nUsing these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>\n<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>\n</ul>\n"
},
{
"answer_id": 2138748,
"score": 4,
"text": "<p>A clean way to do this is to work with the <a href=\"https://en.wikipedia.org/wiki/Hyperboloid_model\" rel=\"nofollow noreferrer\">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\\subset R^{1,n}$, their midpoint is \n$$\\frac{p+q}{\\sqrt{<p+q, p+q>}}.$$\n If you know <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model\" rel=\"nofollow noreferrer\">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href=\"https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1\">this question</a>. </p>\n"
},
{
"answer_id": 2102020,
"score": 1,
"text": "<p><strong>Hint</strong> </p>\n\n<ul>\n<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>\n$$(x-a)^2+(y-b)^2+z^2=r^2$$ \nThe vectors $u=pp_1$ and $v=pp_2$ satisfy\n$$(u\\times v).k=0$$\nUsing these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>\n<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>\n</ul>\n"
},
{
"answer_id": 2138748,
"score": 4,
"text": "<p>A clean way to do this is to work with the <a href=\"https://en.wikipedia.org/wiki/Hyperboloid_model\" rel=\"nofollow noreferrer\">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\\subset R^{1,n}$, their midpoint is \n$$\\frac{p+q}{\\sqrt{<p+q, p+q>}}.$$\n If you know <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model\" rel=\"nofollow noreferrer\">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href=\"https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1\">this question</a>. </p>\n"
},
{
"answer_id": 2102020,
"score": 1,
"text": "<p><strong>Hint</strong> </p>\n\n<ul>\n<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>\n$$(x-a)^2+(y-b)^2+z^2=r^2$$ \nThe vectors $u=pp_1$ and $v=pp_2$ satisfy\n$$(u\\times v).k=0$$\nUsing these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>\n<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>\n</ul>\n"
},
{
"answer_id": 2138748,
"score": 4,
"text": "<p>A clean way to do this is to work with the <a href=\"https://en.wikipedia.org/wiki/Hyperboloid_model\" rel=\"nofollow noreferrer\">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\\subset R^{1,n}$, their midpoint is \n$$\\frac{p+q}{\\sqrt{<p+q, p+q>}}.$$\n If you know <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model\" rel=\"nofollow noreferrer\">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href=\"https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1\">this question</a>. </p>\n"
},
{
"answer_id": 2102020,
"score": 1,
"text": "<p><strong>Hint</strong> </p>\n\n<ul>\n<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>\n$$(x-a)^2+(y-b)^2+z^2=r^2$$ \nThe vectors $u=pp_1$ and $v=pp_2$ satisfy\n$$(u\\times v).k=0$$\nUsing these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>\n<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>\n</ul>\n"
},
{
"answer_id": 2138748,
"score": 4,
"text": "<p>A clean way to do this is to work with the <a href=\"https://en.wikipedia.org/wiki/Hyperboloid_model\" rel=\"nofollow noreferrer\">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\\subset R^{1,n}$, their midpoint is \n$$\\frac{p+q}{\\sqrt{<p+q, p+q>}}.$$\n If you know <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model\" rel=\"nofollow noreferrer\">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href=\"https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1\">this question</a>. </p>\n"
},
{
"answer_id": 2102020,
"score": 1,
"text": "<p><strong>Hint</strong> </p>\n\n<ul>\n<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>\n$$(x-a)^2+(y-b)^2+z^2=r^2$$ \nThe vectors $u=pp_1$ and $v=pp_2$ satisfy\n$$(u\\times v).k=0$$\nUsing these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>\n<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>\n</ul>\n"
}
] | Question: Formula for midpoint in hyperbolic 3-space
Answer 1: <p>A clean way to do this is to work with the <a href="https://en.wikipedia.org/wiki/Hyperboloid_model" rel="nofollow noreferrer">hyperboloid model</a> $H^n$ of hyperbolic space. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\subset R^{1,n}$, their midpoint is
$$\frac{p+q}{\sqrt{\langle p+q, p+q\rangle}}.$$
If you know <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model" rel="nofollow noreferrer">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href="https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1">this question</a>. </p>
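In fact the translation to the upper half-space model is short in code. A minimal Python sketch under the following assumptions (mine, not from the answer): the map $(x,y,z)\mapsto\left(\frac{x^2+y^2+z^2+1}{2z},\frac{x}{z},\frac{y}{z},\frac{x^2+y^2+z^2-1}{2z}\right)$ is taken as the half-space-to-hyperboloid isometry, and the test points are the ones from the question:

```python
import numpy as np

def mink(X, Y):
    # Minkowski form of signature (1, 3)
    return X[0] * Y[0] - X[1:] @ Y[1:]

def to_hyperboloid(p):
    x, y, z = p
    s = x * x + y * y + z * z
    return np.array([(s + 1) / (2 * z), x / z, y / z, (s - 1) / (2 * z)])

def to_half_space(X):
    w = X[0] - X[3]          # equals 1/z under the embedding above
    return np.array([X[1] / w, X[2] / w, 1.0 / w])

def midpoint(p1, p2):
    S = to_hyperboloid(p1) + to_hyperboloid(p2)
    return to_half_space(S / np.sqrt(mink(S, S)))

def dist(p, q):
    # Half-space distance: cosh d = 1 + |p - q|^2 / (2 z_p z_q)
    return np.arccosh(1 + np.sum((p - q) ** 2) / (2 * p[2] * q[2]))

p1, p2 = np.array([2.0, -1.0, 3.0]), np.array([1.0, 2.0, 4.0])
m = midpoint(p1, p2)
print(m, dist(m, p1), dist(m, p2), dist(p1, p2) / 2)   # last three agree
```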
Answer 2: <p><strong>Hint</strong> </p>
<ul>
<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>
$$(x-a)^2+(y-b)^2+z^2=r^2$$
The vectors $u=pp_1$ and $v=pp_2$ satisfy
$$(u\times v)\cdot k=0$$
Using these two equations, you can evaluate $a$, $b$ and $r$ (see the sketch after this list). I think (I am not sure) the rest is easy. </li>
<li>I recall that the Euclidean perpendicular bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>
</ul>
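Both conditions in the hint become linear in $a$ and $b$ once expanded, so the center can be solved for directly. A minimal sketch (the expansion into a 2x2 linear system is my formulation, and the sample points are the ones from the question):

```python
import numpy as np

p1, p2 = np.array([2.0, -1.0, 3.0]), np.array([1.0, 2.0, 4.0])
x1, y1, _ = p1
x2, y2, _ = p2
# |p1 - c|^2 = |p2 - c|^2 (equal radii) and (u x v) . k = 0 (vertical plane)
# expand to two linear equations in the center coordinates a and b:
A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
              [y2 - y1,       x1 - x2]])
rhs = np.array([p2 @ p2 - p1 @ p1, x1 * y2 - x2 * y1])
a, b = np.linalg.solve(A, rhs)
c = np.array([a, b, 0.0])
print(c, np.linalg.norm(p1 - c), np.linalg.norm(p2 - c))   # equal radii
```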
Answer 3: <p>A clean way to do this is to work with the <a href="https://en.wikipedia.org/wiki/Hyperboloid_model" rel="nofollow noreferrer">hyperboloid model</a> $H^n$ of hyperbolic space. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\subset R^{1,n}$, their midpoint is
$$\frac{p+q}{\sqrt{\langle p+q, p+q\rangle}}.$$
If you know <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model" rel="nofollow noreferrer">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href="https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1">this question</a>. </p>
Answer 4: <p><strong>Hint</strong> </p>
<ul>
<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>
$$(x-a)^2+(y-b)^2+z^2=r^2$$
The vectors $u=pp_1$ and $v=pp_2$ satisfy
$$(u\times v)\cdot k=0$$
Using these two equations, you can evaluate $a$, $b$ and $r$. I think (I am not sure) the rest is easy. </li>
<li>I recall that the Euclidean perpendicular bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>
</ul>
Answer 5: <p>A clean way to do this is to work with the <a href="https://en.wikipedia.org/wiki/Hyperboloid_model" rel="nofollow noreferrer">hyperboloid model</a> $H^n$ of hyperbolic space. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\subset R^{1,n}$, their midpoint is
$$\frac{p+q}{\sqrt{\langle p+q, p+q\rangle}}.$$
If you know <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model" rel="nofollow noreferrer">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href="https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1">this question</a>. </p>
Answer 6: <p><strong>Hint</strong> </p>
<ul>
<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>
$$(x-a)^2+(y-b)^2+z^2=r^2$$
The vectors $u=pp_1$ and $v=pp_2$ satisfy
$$(u\times v)\cdot k=0$$
Using these two equations, you can evaluate $a$, $b$ and $r$. I think (I am not sure) the rest is easy. </li>
<li>I recall that the Euclidean perpendicular bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>
</ul>
Answer 7: <p>A clean way to do this is to work with the <a href="https://en.wikipedia.org/wiki/Hyperboloid_model" rel="nofollow noreferrer">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\subset R^{1,n}$, their midpoint is
$$\frac{p+q}{\sqrt{<p+q, p+q>}}.$$
If you know <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model" rel="nofollow noreferrer">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href="https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1">this question</a>. </p>
Answer 8: <p><strong>Hint</strong> </p>
<ul>
<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>
$$(x-a)^2+(y-b)^2+z^2=r^2$$
The vectors $u=pp_1$ and $v=pp_2$ satisfy
$$(u\times v).k=0$$
Using these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>
<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>
</ul>
Answer 9: <p>A clean way to do this is to work with the <a href="https://en.wikipedia.org/wiki/Hyperboloid_model" rel="nofollow noreferrer">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\subset R^{1,n}$, their midpoint is
$$\frac{p+q}{\sqrt{<p+q, p+q>}}.$$
If you know <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model" rel="nofollow noreferrer">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href="https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1">this question</a>. </p>
Answer 10: <p><strong>Hint</strong> </p>
<ul>
<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>
$$(x-a)^2+(y-b)^2+z^2=r^2$$
The vectors $u=pp_1$ and $v=pp_2$ satisfy
$$(u\times v).k=0$$
Using these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>
<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>
</ul>
Answer 11: <p>A clean way to do this is to work with the <a href="https://en.wikipedia.org/wiki/Hyperboloid_model" rel="nofollow noreferrer">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\subset R^{1,n}$, their midpoint is
$$\frac{p+q}{\sqrt{<p+q, p+q>}}.$$
If you know <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model" rel="nofollow noreferrer">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href="https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1">this question</a>. </p>
Answer 12: <p><strong>Hint</strong> </p>
<ul>
<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>
$$(x-a)^2+(y-b)^2+z^2=r^2$$
The vectors $u=pp_1$ and $v=pp_2$ satisfy
$$(u\times v).k=0$$
Using these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>
<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>
</ul>
Answer 13: <p>A clean way to do this is to work with the <a href="https://en.wikipedia.org/wiki/Hyperboloid_model" rel="nofollow noreferrer">hyperboloid model</a> of the hyperbolic space where $H^n$. Then, given two points $p, q$ in the (upper) hyperboloid $H^n\subset R^{1,n}$, their midpoint is
$$\frac{p+q}{\sqrt{<p+q, p+q>}}.$$
If you know <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model#Relation_to_the_hyperboloid_model" rel="nofollow noreferrer">how to go back and forth between the upper half space and the hyperboloid model</a>, you can translate this formula into the upper half space model. (I think it will not be pretty.) See also <a href="https://math.stackexchange.com/questions/499042/how-to-convert-between-the-hyperboloid-model-and-the-poincare-patch-for-mathbb?rq=1">this question</a>. </p>
Answer 14: <p><strong>Hint</strong> </p>
<ul>
<li>Assume that the geodesic passing through the given points $p_1$ and $p_2$ is the circle with center at $p=(a,b,0)$ and radius $r$. Then<br>
$$(x-a)^2+(y-b)^2+z^2=r^2$$
The vectors $u=pp_1$ and $v=pp_2$ satisfy
$$(u\times v).k=0$$
Using these two equations, you can evaluate $a,b$ and $r$. I think(I am not sure) the rest is easy. </li>
<li>I remembered that the Euclidean bisector of the segment $p_1p_2$ intersects the plane $z=0$ at the center of the required geodesic.</li>
</ul>
|
2016 AMC 12A #24 | 2,101,712 | 5 | https://math.stackexchange.com/questions/2101712/2016-amc-12a-24 | <p>Problem 24 on the 2016 AMC 12A asks the following:</p>
<blockquote>
<p>There is a smallest positive real number $a$ such that there exists a positive real number $b$ such that all the roots of the polynomial $x^3-ax^2+bx-a$ are real. In fact, for this value of $a$ the value of $b$ is unique. What is this value of $b$?</p>
</blockquote>
<p>On AoPS, Solution 1 insists that the second derivative of the polynomial must be 0 at the x-intercept, and Solution 3 insists that the roots must be identical. How do they know either of these things? </p>
<p>Thanks.</p>
<p><a href="https://www.artofproblemsolving.com/wiki/index.php/2016_AMC_12A_Problems/Problem_24" rel="nofollow noreferrer">https://www.artofproblemsolving.com/wiki/index.php/2016_AMC_12A_Problems/Problem_24</a></p>
<p>EDIT: I understand why the roots must be equal, but it's still not clear to me why the second derivative must be 0 at the x-intercept. </p>
| [
{
"answer_id": 2101827,
"score": 1,
"text": "<p>If the roots of the equation are $x_1$, $x_2$ and $x_3$ :\n$$\nx_1+x_2+x_3=x_1x_2x_3=a\n$$\nFrom the sum of roots and product of roots condition. This is sufficient to show that $x_1$, $x_2$, $x_3$ are positive as :\n$$\nx_1+x_2+x_3 > 0\n\\\\\nx_1x_2x_3>0\n\\\\\nx_1x_2+x_2x_3+x_1x_3=b>0\n$$\nAs per Descartes Rule of Signs, either all three roots are positive or only one of them is positive. Let's suppose that $x_3$ is the positive root. </p>\n\n<p>$$\nx_1+x_2+x_3 >0\n\\\\\nx_1+x_2>-x_3\n\\\\\n\\lvert x_1 + x_2 \\rvert < x_3\n$$\nWe can see that :\n$$\n\\frac{\\lvert x_1+x_2 \\rvert}{2} \\ge \\sqrt{x_1x_2}\n$$\nAs $x_1$ and $x_2$ are both negative. \nSquaring :\n$$\n\\frac{(x_1+x_2)^2}{4} \\ge x_1x_2\n$$\nNow, \n$$\nx_1x_2+x_2x_3+x_1x_3=x_1x_2+x_3(x_1+x_2)\n$$\nIf this is positive. Then :\n$$\nx_1x_2>x_3 \\lvert x_1+x_2 \\rvert\n$$\nBut\n$$\nx_1x_2 \\le \\frac{(x_1+x_2)^2}{4}\n$$\nHence :\n$$\nx_3 \\lvert x_1 + x_2 \\rvert < \\frac{(x_1+x_2)^2}{4}\n\\\\\nx_3 < \\frac{\\lvert x_1 + x_2 \\rvert}{ 4}\n$$\nwhich is impossible as<br>\n$$\n\\lvert x_1 + x_2 \\rvert < x_3\n$$</p>\n\n<p>EDIT : It appears that there is a much easier way to prove that all three roots are positive. The credit for this goes to @dxiv. Since the quadratic equation is $x^3-ax^2+bx-a$, if $x<0$, then the expression always evaluates to a negative value since $a,b$ are positive. In this case, there won't be any roots to the equation which proves that if there are roots to the equation, then they have to be positive. </p>\n\n<p>This has been proved earlier. Hence. this equation has three positive real roots. \nApplying the AM-GM inequality :</p>\n\n<p>$$\n\\frac{x_1+x_2+x_3}{3} \\ge \\sqrt[3]{x_1x_2x_3}\n\\\\\n\\frac{a}{3} \\ge \\sqrt[3]a\n\\\\\na \\ge 3\\sqrt{3}\n\\\\\n\\frac{x_1+x_2+x_3}{3} = \\sqrt{3}= \\sqrt[3]{x_1x_2x_3}\n\\implies AM=GM\n$$\nHence $x_1$, $x_2$ and $x_3$ are equal as the geometric mean of three numbers is equal to their arithmetic mean only if all three are equal. </p>\n"
}
] | Question: 2016 AMC 12A #24
Answer 1: <p>If the roots of the equation are $x_1$, $x_2$ and $x_3$ :
$$
x_1+x_2+x_3=x_1x_2x_3=a
$$
by Vieta's formulas (the sum and the product of the roots). We claim that $x_1$, $x_2$, $x_3$ must all be positive. From Vieta's formulas,
$$
x_1+x_2+x_3 > 0
\\
x_1x_2x_3>0
\\
x_1x_2+x_2x_3+x_1x_3=b>0
$$
By Descartes' Rule of Signs, either all three roots are positive or exactly one of them is. Suppose, for contradiction, that only $x_3$ is positive (so $x_1,x_2<0$). </p>
<p>$$
x_1+x_2+x_3 >0
\\
x_1+x_2>-x_3
\\
\lvert x_1 + x_2 \rvert < x_3
$$
By AM-GM applied to $\lvert x_1\rvert$ and $\lvert x_2\rvert$ (both $x_1$ and $x_2$ are negative, so $x_1x_2>0$):
$$
\frac{\lvert x_1+x_2 \rvert}{2} \ge \sqrt{x_1x_2}
$$
Squaring:
$$
\frac{(x_1+x_2)^2}{4} \ge x_1x_2
$$
Now,
$$
x_1x_2+x_2x_3+x_1x_3=x_1x_2+x_3(x_1+x_2)
$$
If this is positive (it equals $b>0$), then, since $x_1+x_2=-\lvert x_1+x_2\rvert$:
$$
x_1x_2>x_3 \lvert x_1+x_2 \rvert
$$
But
$$
x_1x_2 \le \frac{(x_1+x_2)^2}{4}
$$
Hence :
$$
x_3 \lvert x_1 + x_2 \rvert < \frac{(x_1+x_2)^2}{4}
\\
x_3 < \frac{\lvert x_1 + x_2 \rvert}{ 4}
$$
which is impossible as<br>
$$
\lvert x_1 + x_2 \rvert < x_3
$$</p>
<p>EDIT: There is a much easier way to prove that all three roots are positive; the credit for this goes to @dxiv. Since the cubic is $x^3-ax^2+bx-a$ with $a,b>0$, every term is $\le 0$ and the constant term is $-a<0$ whenever $x\le 0$, so the polynomial is negative there and has no roots with $x\le 0$. Hence every real root must be positive. </p>
<p>Positivity of the roots has thus been proved, so this equation has three positive real roots.
Applying the AM-GM inequality:</p>
<p>$$
\frac{x_1+x_2+x_3}{3} \ge \sqrt[3]{x_1x_2x_3}
\\
\frac{a}{3} \ge \sqrt[3]{a}
\\
a^{2/3} \ge 3
\\
a \ge 3\sqrt{3}
$$
For the smallest value $a = 3\sqrt{3}$,
$$
\frac{x_1+x_2+x_3}{3} = \sqrt{3} = \sqrt[3]{x_1x_2x_3},
$$
so AM $=$ GM. Hence $x_1$, $x_2$ and $x_3$ are equal, since the geometric mean of three positive numbers equals their arithmetic mean only if all three are equal. Each root is therefore $\sqrt{3}$, and $b = x_1x_2+x_2x_3+x_3x_1 = 3(\sqrt{3})^2 = 9$. </p>
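As a quick numerical sanity check (my own sketch, using numpy): with $a=3\sqrt{3}$ and $b=9$, the cubic should have the triple root $\sqrt{3}$.

```python
# Numerical sanity check (my own sketch): for a = 3*sqrt(3), b = 9,
# the cubic x^3 - a x^2 + b x - a has the triple root sqrt(3).
import numpy as np

a = 3 * np.sqrt(3)
b = 9  # = 3 * (sqrt(3))**2, sum of pairwise products of a triple root
print(np.roots([1, -a, b, -a]))  # ~ [1.732, 1.732, 1.732], up to float error
print(np.sqrt(3))                # 1.7320508...
```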
|
On average, how many times will I need to roll a six-sided die before I see ten ONES in total? | 2,101,871 | 5 | https://math.stackexchange.com/questions/2101871/on-average-how-many-times-will-i-need-to-roll-a-six-sided-die-before-i-see-ten | <p>On average, how many times will I need to roll a six-sided die before I see ten ONES in total?</p>
<p>I'd like to be able to extend my knowledge to answer other questions in the same form, such as, "If only 15% of fish are within the legal size limit, how many fish would I need to catch before I get five keepers?"</p>
<p>Thanks kindly</p>
| [
{
"answer_id": 2101895,
"score": 2,
"text": "<p>If you roll a 6-sided die 6 times, on average you'd get each number one time. 12 times (6*2) and you'd get each number 2 times. Roll it 60 times (6*10), and you'd get each number 10 times on average. </p>\n\n<p>This is just intuition. Making the \"on average\" mathematically rigorous takes some work.</p>\n"
},
{
"answer_id": 2101936,
"score": 2,
"text": "<p>Expected value is additive, so the expected number of times you need to roll the die to see 10 ones is 10 times the expected value of times you must roll the die to get a one. If after performing an experiment the probability of an event is $p$, the expected value of repetitions of the experiment needed for the event to occur is the inverse of the probability $1/p$. The probability of getting a one after rolling a die is $1/6$, so the expected number of times you need to roll a die to see a one is 6. Because of the additive property mentioned above, the expected number of times you need to roll the die to get 10 ones is $10 \\times 6 = 60$.</p>\n"
}
] | Question: On average, how many times will I need to roll a six-sided die before I see ten ONES in total?
Answer 1: <p>If you roll a 6-sided die 6 times, on average you'd get each number one time. Roll it 12 times (6×2) and you'd get each number 2 times; roll it 60 times (6×10) and you'd get each number 10 times on average. </p>
<p>This is just intuition. Making the "on average" mathematically rigorous takes some work.</p>
Answer 2: <p>Expectation is additive (linearity of expectation), so the expected number of rolls needed to see 10 ones is 10 times the expected number of rolls needed to get one. If each trial succeeds with probability $p$, the expected number of trials until the first success is $1/p$ (the mean of a geometric distribution). The probability of rolling a one is $1/6$, so the expected number of rolls to see a one is 6, and the expected number of rolls to see 10 ones is $10 \times 6 = 60$. The same reasoning answers the fishing question: with success probability $0.15$ per catch, the expected number of catches for five keepers is $5/0.15 \approx 33.3$.</p>
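A short Monte Carlo sketch (my own illustration, plain Python) that estimates both expectations directly:

```python
# Monte Carlo sketch (my own illustration): average number of trials
# until the k-th success, with success probability p per trial.
import random

def trials_until(k, p, runs=100_000):
    total = 0
    for _ in range(runs):
        hits = trials = 0
        while hits < k:
            trials += 1
            if random.random() < p:
                hits += 1
        total += trials
    return total / runs

print(trials_until(10, 1 / 6))  # ~ 60 rolls for ten ones
print(trials_until(5, 0.15))    # ~ 33.3 catches for five keepers
```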
|
Sum the series: $1+\frac{1+3}{2!}+\frac{1+3+3^2}{3!}+\cdots$ | 2,101,962 | 5 | https://math.stackexchange.com/questions/2101962/sum-the-series-1-frac132-frac13323-cdots | <p>We have the series$$1+\frac{1+3}{2!}+\frac{1+3+3^2}{3!}+\cdots$$How can we find the sum$?$</p>
<p>MY TRY: $n$th term of the series i.e $T_n=\frac{3^0+3^1+3^2+\cdots+3^n}{(n+1)!}$. I don't know how to proceed further. Thank you.</p>
| [
{
"answer_id": 2101974,
"score": 6,
"text": "<p>The numerator is a geometric sum that evaluates to,</p>\n\n<p>$$\\frac{3^{n+1}-1}{3-1}$$</p>\n\n<p>Hence what we have is,</p>\n\n<p>$$\\frac{1}{2} \\sum_{n=0}^{\\infty} \\frac{(3^{n+1}-1)}{(n+1)!}$$</p>\n\n<p>$$=\\frac{1}{2} \\sum_{n=1}^{\\infty} \\frac{3^n-1}{n!}$$</p>\n\n<p>$$=\\frac{1}{2} \\left( \\sum_{n=1}^{\\infty} \\frac{3^n}{n!}- \\sum_{n=1}^{\\infty} \\frac{1^n}{n!} \\right)$$</p>\n\n<p>Recognizing the Taylor series of $e^x$ we have</p>\n\n<p>$$=\\frac{1}{2}((e^3-1)-(e-1))$$</p>\n"
},
{
"answer_id": 2101987,
"score": 2,
"text": "<p>The general term is</p>\n\n<p>$$T_n={1+3+\\cdots +3^n\\over n!}={3^{n+1}-1\\over 2}{1\\over (n+1)!}$$</p>\n\n<p>Now we can rewrite the series as</p>\n\n<p>$$S_n={1\\over 2}\\sum_{n=0}^\\infty {3^{n+1}\\over (n+1)!}-{1\\over 2}\\sum_{n=0}^\\infty {1\\over (n+1!}={1\\over 2}\\left(e^3-1\\right)-{1\\over 2}(e-1)={1\\over 2}\\left(e^3-e\\right)$$</p>\n"
}
] | Question: Sum the series: $1+\frac{1+3}{2!}+\frac{1+3+3^2}{3!}+\cdots$
Answer 1: <p>The numerator is a geometric sum that evaluates to,</p>
<p>$$\frac{3^{n+1}-1}{3-1}$$</p>
<p>Hence what we have is,</p>
<p>$$\frac{1}{2} \sum_{n=0}^{\infty} \frac{(3^{n+1}-1)}{(n+1)!}$$</p>
<p>$$=\frac{1}{2} \sum_{n=1}^{\infty} \frac{3^n-1}{n!}$$</p>
<p>$$=\frac{1}{2} \left( \sum_{n=1}^{\infty} \frac{3^n}{n!}- \sum_{n=1}^{\infty} \frac{1^n}{n!} \right)$$</p>
<p>Recognizing the Taylor series of $e^x$ we have</p>
<p>$$=\frac{1}{2}\bigl((e^3-1)-(e-1)\bigr)=\frac{e^3-e}{2}$$</p>
Answer 2: <p>The general term is</p>
<p>$$T_n={1+3+\cdots +3^n\over (n+1)!}={3^{n+1}-1\over 2}\cdot{1\over (n+1)!}$$</p>
<p>Now we can rewrite the series as</p>
<p>$$S={1\over 2}\sum_{n=0}^\infty {3^{n+1}\over (n+1)!}-{1\over 2}\sum_{n=0}^\infty {1\over (n+1)!}={1\over 2}\left(e^3-1\right)-{1\over 2}(e-1)={1\over 2}\left(e^3-e\right)$$</p>
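A short numerical check (my own sketch, plain Python) that the partial sums converge to $\frac{1}{2}(e^3-e)\approx 8.68363$:

```python
# Numerical check (my own sketch): partial sums of
# 1 + (1+3)/2! + (1+3+9)/3! + ... approach (e**3 - e) / 2.
from math import e, factorial

total, geom = 0.0, 0
for n in range(1, 30):
    geom = 3 * geom + 1          # 1 + 3 + ... + 3**(n-1)
    total += geom / factorial(n)

print(total, (e**3 - e) / 2)     # both ~ 8.68363
```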
|
Finding points of self intersection | 2,102,121 | 5 | https://math.stackexchange.com/questions/2102121/finding-points-of-self-intersection | <p>A curve is parameterised by $$\mathbf{r}(t)=((2+\cos 3t)\cos 2t, (2+\cos 3t)\sin 2t), t\in [0,2\pi ]. $$
How can I find the self-intersection points? I set $\mathbf{r}(t_1)=\mathbf{r}(t_2)$ and then tried to solve the simultaneous equations, but it doesn't seem to work out. Any pointers?</p>
| [
{
"answer_id": 2102170,
"score": 1,
"text": "<p><strong>Geometrically</strong></p>\n\n<p>Mathematica shows</p>\n\n<p><a href=\"https://i.stack.imgur.com/5GyQ3.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/5GyQ3.png\" alt=\"enter image description here\"></a></p>\n\n<p>Three points are approximately\n$$(1,1.7),(-2,0),(1,-1.7)$$</p>\n\n<p><strong>Analytically</strong></p>\n\n<p>(deleted)</p>\n"
},
{
"answer_id": 2102312,
"score": 1,
"text": "<p>Parametric representation </p>\n\n<p>$$\\mathbf{r}(t)=((2+\\cos 3t)\\cos 2t, (2+\\cos 3t)\\sin 2t), t\\in [0,2\\pi ]$$</p>\n\n<p>can be replaced by this (complex) one</p>\n\n<p>$$\\mathbf{r}(t)=(2+\\cos 3t)e^{2it} , t\\in [0,2\\pi ]$$</p>\n\n<p>Thus, we are looking for values of $t_1$ and $t_2$, $t_1 \\neq t_2$, such that:</p>\n\n<p>$$\\tag{1}(2+\\cos 3t_1)e^{2it_1}=(2+\\cos 3t_2)e^{2it_2}$$</p>\n\n<p>Two complex numbers are equal if and only if their moduli are equal and their arguments are equal (modulo $2 \\pi$).</p>\n\n<p>Remark:\nThe fact that $2+\\cos 3t>0$ for every value of $t$ shows that $2+\\cos 3t_1$ and $2+\\cos 3t_2$ are the moduli of the LHS and RHS of (1), respectively.</p>\n\n<p>Thus, (1) is equivalent to:</p>\n\n<p>$$\\cases{\\cos(3t_1)=\\cos(3 t_2)\\\\2t_1=2t_2 \\pmod{2 \\pi}}$$</p>\n\n<p>As $$\\cos(u)=\\cos(v) \\ \\iff \\ u=\\pm v+K 2\\pi:$$</p>\n\n<p>the previous conditions are equivalent to</p>\n\n<p>$$\\cases{3t_1= s 3 t_2+K 2\\pi\\\\2t_1=2t_2+K' 2 \\pi}$$</p>\n\n<p>where $s=\\pm1$ and $K,K'$ are integers.</p>\n\n<p>$$\\tag{2}\\iff \\ \\ \\cases{t_1= s t_2+K 2\\pi/3\\\\t_1=t_2+K' \\pi}$$</p>\n\n<p>The cases where $s=1$ will not give double points (because they lead to $t_1=t_2+K''2\\pi$). We can thus assume $s=-1$, i.e.,</p>\n\n<p>$$\\tag{3}\\iff \\ \\ \\cases{t_1= -t_2+K 2\\pi/3 \\ \\ (a)\\\\t_1=t_2+K' \\pi \\ \\ (b)}$$</p>\n\n<p>Adding and subtracting (3)(a) and (3)(b):</p>\n\n<p>$$\\tag{4}\\cases{2t_1=K2\\pi/3+K'\\pi \\ \\ \\iff \\ \\ t_1=K\\pi/3+K'\\pi/2\\ \\ (a)\\\\2t_2=K2\\pi/3-K'\\pi \\ \\ \\iff \\ \\ t_2=K\\pi/3-K'\\pi/2 \\ \\ (b)}$$</p>\n\n<p><strong>with the same values of $K$ and $K'$ in (4)(a) and (4)(b).</strong></p>\n\n<p>For example, if $K=2$ and $K'=1$, $(t_1,t_2)=(7\\pi/6,\\pi/6)$, giving point $(1,\\sqrt{3}).$</p>\n\n<p>I leave to you the task of considering the other cases (taking the different possible values of the integers $K$ and $K'$; some of them are redundant). You will end up with the following three solutions (up to an exchange of $t_1$ and $t_2$, of course):</p>\n\n<p>$$(t_1,t_2)=(7\\pi/6,\\pi/6), \\ \\ (5\\pi/6,11\\pi/6), \\ \\ \\ \\ (\\pi/2,3\\pi/2)$$ yielding double points:</p>\n\n<p>$$(1,\\sqrt{3}), \\ \\ (1,-\\sqrt{3}), \\ \\ (-2,0)$$</p>\n\n<p>respectively.</p>\n"
}
] | Question: Finding points of self intersection
Answer 1: <p><strong>Geometrically</strong></p>
<p>Mathematica shows</p>
<p><a href="https://i.stack.imgur.com/5GyQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5GyQ3.png" alt="enter image description here"></a></p>
<p>Three points are approximately
$$(1,1.7),(-2,0),(1,-1.7)$$</p>
<p><strong>Analytically</strong></p>
<p>(deleted)</p>
Answer 2: <p>Parametric representation </p>
<p>$$\mathbf{r}(t)=((2+\cos 3t)\cos 2t, (2+\cos 3t)\sin 2t), t\in [0,2\pi ]$$</p>
<p>can be replaced by this (complex) one</p>
<p>$$\mathbf{r}(t)=(2+\cos 3t)e^{2it} , t\in [0,2\pi ]$$</p>
<p>Thus, we are looking for values of $t_1$ and $t_2$, $t_1 \neq t_2$, such that:</p>
<p>$$\tag{1}(2+\cos 3t_1)e^{2it_1}=(2+\cos 3t_2)e^{2it_2}$$</p>
<p>Two complex numbers are equal if and only if their moduli are equal and their arguments are equal (modulo $2 \pi$).</p>
<p>Remark:
The fact that $2+\cos 3t>0$ for every value of $t$ shows that $2+\cos 3t_1$ and $2+\cos 3t_2$ are the moduli of the LHS and RHS of (1), respectively.</p>
<p>Thus, (1) is equivalent to:</p>
<p>$$\cases{\cos(3t_1)=\cos(3 t_2)\\2t_1=2t_2 \pmod{2 \pi}}$$</p>
<p>As $$\cos(u)=\cos(v) \ \iff \ u=\pm v+K 2\pi:$$</p>
<p>the previous conditions are equivalent to</p>
<p>$$\cases{3t_1= s 3 t_2+K 2\pi\\2t_1=2t_2+K' 2 \pi}$$</p>
<p>where $s=\pm1$ and $K,K'$ are integers.</p>
<p>$$\tag{2}\iff \ \ \cases{t_1= s t_2+K 2\pi/3\\t_1=t_2+K' \pi}$$</p>
<p>The cases where $s=1$ will not give double points (because they lead to $t_1=t_2+K''2\pi$). We can thus assume $s=-1$, i.e.,</p>
<p>$$\tag{3}\iff \ \ \cases{t_1= -t_2+K 2\pi/3 \ \ (a)\\t_1=t_2+K' \pi \ \ (b)}$$</p>
<p>Adding and subtracting (3)(a) and (3)(b):</p>
<p>$$\tag{4}\cases{2t_1=K2\pi/3+K'\pi \ \ \iff \ \ t_1=K\pi/3+K'\pi/2\ \ (a)\\2t_2=K2\pi/3-K'\pi \ \ \iff \ \ t_2=K\pi/3-K'\pi/2 \ \ (b)}$$</p>
<p><strong>with the same values of $K$ and $K'$ in (4)(a) and (4)(b).</strong></p>
<p>For example, if $K=2$ and $K'=1$, $(t_1,t_2)=(7\pi/6,\pi/6)$, giving point $(1,\sqrt{3}).$</p>
<p>I leave to you the task of considering the other cases (taking the different possible values of the integers $K$ and $K'$; some of them are redundant). You will end up with the following three solutions (up to an exchange of $t_1$ and $t_2$, of course):</p>
<p>$$(t_1,t_2)=(7\pi/6,\pi/6), \ \ (5\pi/6,11\pi/6), \ \ \ \ (\pi/2,3\pi/2)$$ yielding double points:</p>
<p>$$(1,\sqrt{3}), \ \ (1,-\sqrt{3}), \ \ (-2,0)$$</p>
<p>respectively.</p>
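<p>For instance, $(K,K')=(0,1)$, $(1,1)$ and $(2,1)$ reproduce the three pairs above. As a cross-check (a minimal sketch added for illustration, not part of the original answer; grid size and tolerances are ad hoc), a brute-force scan over the parameter square also locates the three double points, printing clusters of nearby hits around each one:</p>
<pre><code>// Sketch: brute-force search for self-intersections of
//   r(t) = ((2+cos 3t) cos 2t, (2+cos 3t) sin 2t),  t in [0, 2*pi).
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const int N = 2000;                                // grid resolution
    for (int i = 0; i < N; ++i) {
        for (int j = i + 1; j < N; ++j) {
            double t1 = 2 * PI * i / N, t2 = 2 * PI * j / N;
            double dt = t2 - t1;
            if (dt < 0.1 || dt > 2 * PI - 0.1) continue;  // skip t1 ~ t2
            double r1 = 2 + std::cos(3 * t1), r2 = 2 + std::cos(3 * t2);
            double dx = r1 * std::cos(2 * t1) - r2 * std::cos(2 * t2);
            double dy = r1 * std::sin(2 * t1) - r2 * std::sin(2 * t2);
            if (dx * dx + dy * dy < 1e-3)              // near-coincident
                std::printf("t1=%.4f t2=%.4f -> (%.4f, %.4f)\n", t1, t2,
                            r1 * std::cos(2 * t1), r1 * std::sin(2 * t1));
        }
    }
    return 0;
}
</code></pre>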
|
Orthogonal group acts on symmetric matrices | 2,102,150 | 5 | https://math.stackexchange.com/questions/2102150/orthogonal-group-acts-on-symmetric-matrices | <p>Let $O_n(\mathbb{R})$ act on $Sym_n(\mathbb{R})$, the symmetric matrices with real entries, via $S \mapsto A^T SA$ for $A \in O_n(\mathbb{R})$ and $S \in Sym_n(\mathbb{R})$. What is the space of orbits $Sym_n(\mathbb{R})/O_n(\mathbb{R})$ as a set (and what is a basis for the topology)?</p>
<p>I know that we can diagonalize a symmetric matrix $S$ with $Q\in O_n(\mathbb{R})$ such that $QSQ^{-1}$ is diagonal, but I don't know how to continue. Thanks a lot for your help!</p>
| [
{
"answer_id": 2102173,
"score": 3,
"text": "<p>Each orbit contains a diagonal matrix of the form\n$$\n\\pmatrix{\\lambda_1\\\\&\\lambda_2}\n$$\nwith $\\lambda_1 \\geq \\lambda_2$. This gives us a homeomorphism between $Sym_2(\\mathbb{R})/O_2(\\Bbb R)$ and $\\Bbb R^2/ [(x,y) \\sim (y,x)]$. That is, $Sym_2(\\mathbb{R})/O_2(\\Bbb R)$ is homeomorphic to the set of unordered real pairs.</p>\n\n<hr>\n\n<p>We have the following basis for $\\Bbb R^2/ [(x,y) \\sim (y,x)]$: for every $(x,y)$ with $x \\neq y$, take every neighborhood of $(x,y)$ in $\\Bbb R^2$ which is small enough to miss the line $y = x$. For every point $(x,x)$, consider the usual neighborhoods, but exclude any points $(x,y)$ for which $x < y$.</p>\n"
},
{
"answer_id": 2102162,
"score": 2,
"text": "<p>Every orbit contains exactly one diagonal matrix $$\\begin{pmatrix} a & 0 \\\\ 0 & b\\end{pmatrix},$$ with $a\\geq b,$ so the quotient space is exactly the set of pairs $(x, y)\\in \\mathbb{R}^2,$ with $x\\geq y.$ Notice that the answer would be different for quotient by $SO(2),$ since then you can't sort, because the matrix $\\begin{pmatrix}0&1\\\\1&0\\end{pmatrix}$ has determinant $-1.$</p>\n"
}
] | Question: Orthogonal group acts on symmetric matrices
Answer 1: <p>Each orbit contains a diagonal matrix of the form
$$
\pmatrix{\lambda_1\\&\lambda_2}
$$
with $\lambda_1 \geq \lambda_2$. This gives us a homeomorphism between $Sym_2(\mathbb{R})/O_2(\Bbb R)$ and $\Bbb R^2/ [(x,y) \sim (y,x)]$. That is, $Sym_2(\mathbb{R})/O_2(\Bbb R)$ is homeomorphic to the set of unordered real pairs.</p>
<hr>
<p>We have the following basis for $\Bbb R^2/ [(x,y) \sim (y,x)]$: for every $(x,y)$ with $x \neq y$, take every neighborhood of $(x,y)$ in $\Bbb R^2$ which is small enough to miss the line $y = x$. For every point $(x,x)$, consider the usual neighborhoods, but exclude any points $(x,y)$ for which $x < y$.</p>
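<p>An explicit model makes this concrete (a standard supplementary observation, not part of the original answer): an unordered pair of reals is determined by its elementary symmetric functions, that is, by trace and determinant, and for a real symmetric matrix the discriminant $(\operatorname{tr} S)^2-4\det S$ is nonnegative. Hence</p>
<p>$$[S]\;\mapsto\;(\operatorname{tr} S,\,\det S)$$</p>
<p>identifies $Sym_2(\mathbb{R})/O_2(\Bbb R)$ with the closed region $\{(t,d)\in\Bbb R^2 : t^2\ge 4d\}$.</p>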
Answer 2: <p>Every orbit contains exactly one diagonal matrix $$\begin{pmatrix} a & 0 \\ 0 & b\end{pmatrix},$$ with $a\geq b,$ so the quotient space is exactly the set of pairs $(x, y)\in \mathbb{R}^2,$ with $x\geq y.$ Notice that the answer would be different for quotient by $SO(2),$ since then you can't sort, because the matrix $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ has determinant $-1.$</p>
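<p>A quick numerical illustration (a minimal sketch assuming the Eigen linear-algebra library; the sample matrix and rotation angle are made up for the example): conjugating by an orthogonal matrix leaves the sorted eigenvalues, and hence the orbit invariants, unchanged.</p>
<pre><code>// Sketch (assumes Eigen): Q^T S Q has the same sorted eigenvalues as S.
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    Eigen::Matrix2d S;
    S << 2.0, 1.0,
         1.0, 3.0;
    double th = 0.7;                               // arbitrary angle
    Eigen::Matrix2d Q;
    Q << std::cos(th), -std::sin(th),
         std::sin(th),  std::cos(th);
    Eigen::Matrix2d S2 = Q.transpose() * S * Q;

    // SelfAdjointEigenSolver sorts eigenvalues in increasing order.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> e1(S), e2(S2);
    std::cout << e1.eigenvalues().transpose() << "\n"
              << e2.eigenvalues().transpose() << "\n";   // identical rows
    return 0;
}
</code></pre>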
|
Evaluating continued fractions | 2,102,309 | 5 | https://math.stackexchange.com/questions/2102309/evaluating-continued-fractions | <p>I am writing a program which will compute $\exp(z)$.</p>
<p>Originally I used the Taylor series, which worked fine. However, continued fractions can converge more quickly than power series, so I decided to go that route.</p>
<p>I found this continued fraction in multiple places. It's from <em>The Application of Continued Fractions and Their Generalizations to Problems in Approximation Theory</em> by A. N. Khovanskii (1963), pg 114.</p>
<p>$${e^{z}=1+{\cfrac {2z}{2-z+{\cfrac {z^{2}}{6+{\cfrac {z^{2}}{10+{\cfrac {z^{2}}{14+\ddots }}}}}}}}}$$</p>
<p>It can be represented as</p>
<p>$${e^{z}=1+\cfrac{2z}{2-z +}\cfrac{z^2/6}{1 +}\sum_{m=3}^{\infty}\left({\cfrac{a_m z^2}{1}}\right)},\qquad a_m=\frac{1}{4(2m-3)(2m-1)}$$</p>
<p>(Sorry, first time using Mathjax.)</p>
<p>Annie A.M. Cuyt, Vigdis Petersen, Brigitte Verdonk, Haakon Waadeland, and William B. Jones. 2008. <em>Handbook of Continued Fractions for Special Functions</em> (1 ed.). Springer Publishing Company, Incorporated, pg 194.</p>
<p>I'm computing the continued fraction using the modified Lentz algorithm from <em>Numerical Recipes in C: The Art of Scientific Computing</em> (ISBN 0-521-43105-5), pg 171.</p>
<p><strong>The issue:</strong> it only works for a small set of numbers. (From what I can tell, [-30, 30].)</p>
<p>So, my question: is this expected? I'm relatively new to continued fractions, so while I <em>think</em> I grasp them, I'm not entirely sure.</p>
<p>Given a "generator", C++'s boost library can compute continued fractions. Essentially each call to the "generator" returns the next term in the CF.</p>
<p>Here's what I used (where $z$ is the input and $m$ is the current term index):</p>
<ul>
<li>Term 0:
<ul>
<li>A: $0$</li>
<li>B: $1$</li>
</ul></li>
<li>Term 1:
<ul>
<li>A: $2z$</li>
<li>B: $2-z$</li>
</ul></li>
<li>Term 2:
<ul>
<li>A: $z^2/6$ (matching the $z^2/6$ partial numerator in the fraction above)</li>
<li>B: $1$</li>
</ul></li>
<li>Term 3..N:
<ul>
<li>A: $1 / (4 * (2m - 3) * (2m - 1)) * z^2$ (see the sketch after this list)</li>
<li>B: $1$</li>
</ul></li>
</ul>
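<p>For reference, here is a minimal Python sketch of the modified Lentz iteration driven by exactly these terms (my own transcription of the Numerical Recipes scheme, not the original C++/boost code; the tolerance, iteration cap, and <code>tiny</code> guard value are arbitrary choices):</p>
<p><code>def exp_cf(z, tol=1e-15, max_terms=200):
    # Modified Lentz evaluation of the continued fraction for exp(z):
    # b0 = 1, (a1, b1) = (2z, 2 - z), (a2, b2) = (z^2/6, 1),
    # (am, bm) = (z^2 / (4(2m - 3)(2m - 1)), 1) for m >= 3.
    tiny = 1e-30
    f = 1.0                  # b0
    C, D = f, 0.0
    for m in range(1, max_terms):
        if m == 1:
            a, b = 2.0 * z, 2.0 - z
        elif m == 2:
            a, b = z * z / 6.0, 1.0
        else:
            a, b = z * z / (4.0 * (2 * m - 3) * (2 * m - 1)), 1.0
        D = b + a * D
        if D == 0.0:
            D = tiny
        C = b + a / C
        if C == 0.0:
            C = tiny
        D = 1.0 / D
        delta = C * D
        f *= delta
        if abs(delta - 1.0) < tol:
            break
    return f
</code></p>
<p>In double precision this agrees with <code>math.exp</code> for moderate $z$, and for $z = 38.5$ it stalls on a wrong value much like the run below.</p>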
<p>Given $z = 38.5$, the Lentz algorithm provides the following (each line is $f_j$):</p>
<p><code>-1.1095890410958904
1.3657233326736593
-1.8636200592602417
2.81700268519061
-4.711808282946167
8.70960008496205
-17.765408200530924
39.92118587023142
-98.65273729066934
267.59129323043885
-795.1270569517195
2582.9903072474267
-9154.51726459279
35324.19248620738
-148091.74385797494
673154.6572372171
-3.310843427214524e+06
1.758455218773504e+07
-1.0065603099002995e+08
6.197709770714103e+08
-4.097272719854685e+09
2.9029669729927063e+10
-2.2004739456295242e+11
1.7812408854416082e+12
-1.5389163725064727e+13
1.40311430366499e+14
-1.483409360122725e+15
8.317237533312957e+15
2.273093139324771e+16
1.964661705307002e+16
1.9877542316211204e+16
1.985862955594415e+16
1.9860080303623144e+16
1.985997463174032e+16
1.9859981941770148e+16
1.9859981460940616e+16
1.9859981491045844e+16
1.9859981489249772e+16
1.9859981489351984e+16
1.9859981489346428e+16
1.9859981489346716e+16
1.98599814893467e+16
</code></p>
<p>So it converges to $1.98599814893467e+16$, while the actual answer should be $\approx 52521552285925160$.</p>
| [
{
"answer_id": 2106200,
"score": 4,
"text": "<p>This seems to be a computational problem. In particular, it may be a round-off representation error, which typically occurs when we attempt to represent a number using a finite string of digits. In this case, rounding errors even in the decimal digits of the first numbers in the progression shown in the OP finally results in a large error of convergence.</p>\n\n<p>The continued fraction is correct and works well for any $z $ (also note that it is only a variation of other <a href=\"https://en.m.wikipedia.org/wiki/Euler's_continued_fraction_formula\" rel=\"nofollow noreferrer\">Euler's continued fractions giving $e^z $</a>). However, using standard calculation tools that work with 16 digits of precision, it provides a very good approximation only until about $z=15$. For higher values of $z $, it begins to significantly underestimate the true value of $e^z$. Notably, this error does not change if we increase the length of the continued fraction (i.e. increasing $m $), supporting the hypothesis of its round-off nature. </p>\n\n<p>For instance, calculating at a $16$-digit precision, for $z=15\\,$ we have a good approximation (absolute error $0.0004\\,\\,\\,$), but for $z=20\\,$ the underestimation error is $\\approx 4.9\\,$, raising to $\\approx 208585\\,\\,\\,$ for $z=25\\,$ and to $\\approx 9218010394\\,\\,\\,$ for $z=30\\,$. </p>\n\n<p>Interestingly, in these first values, the magnitude of the absolute error seems to growth exponentially: see for example <a href=\"http://m.wolframalpha.com/input/?i=linear%20fit%20%28%7B20%2Clog%20%285%29%7D%2C%7B25%2Clog%20%28208585%29%7D%2C%7B30%2Clog%20%289218010394%29%7D%29&x=0&y=0\" rel=\"nofollow noreferrer\">this linear fitting by WA</a> for the relation between $z $ and the log of the absolute underestimation error, calculated for the values $z=20\\,$, $z=25\\,$, and $z=30\\,$ (further analysis of the logarithm of the error for higher values of $z $, however, seems to suggest a slightly lower growth rate).</p>\n"
}
] | Question: Evaluating continued fractions
Answer 1: <p>This seems to be a computational problem. In particular, it may be a round-off representation error, which typically occurs when we attempt to represent a number using a finite string of digits. In this case, rounding errors even in the decimal digits of the first numbers in the progression shown in the OP finally result in a large error of convergence.</p>
<p>The continued fraction is correct and works well for any $z $ (also note that it is only a variation of other <a href="https://en.m.wikipedia.org/wiki/Euler's_continued_fraction_formula" rel="nofollow noreferrer">Euler's continued fractions giving $e^z $</a>). However, using standard calculation tools that work with 16 digits of precision, it provides a very good approximation only until about $z=15$. For higher values of $z $, it begins to significantly underestimate the true value of $e^z$. Notably, this error does not change if we increase the length of the continued fraction (i.e. increasing $m $), supporting the hypothesis of its round-off nature. </p>
<p>For instance, calculating at a $16$-digit precision, for $z=15\,$ we have a good approximation (absolute error $0.0004\,\,\,$), but for $z=20\,$ the underestimation error is $\approx 4.9\,$, rising to $\approx 208585\,\,\,$ for $z=25\,$ and to $\approx 9218010394\,\,\,$ for $z=30\,$.</p>
<p>Interestingly, in these first values, the magnitude of the absolute error seems to grow exponentially: see for example <a href="http://m.wolframalpha.com/input/?i=linear%20fit%20%28%7B20%2Clog%20%285%29%7D%2C%7B25%2Clog%20%28208585%29%7D%2C%7B30%2Clog%20%289218010394%29%7D%29&x=0&y=0" rel="nofollow noreferrer">this linear fitting by WA</a> for the relation between $z $ and the log of the absolute underestimation error, calculated for the values $z=20\,$, $z=25\,$, and $z=30\,$ (further analysis of the logarithm of the error for higher values of $z $, however, seems to suggest a slightly lower growth rate).</p>
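<p>One way to test the round-off hypothesis directly is to re-evaluate the same fraction at a higher working precision. Here is a minimal sketch with mpmath (my own illustration, not part of the original answer; it evaluates the fraction innermost-first with a fixed number of levels instead of using Lentz):</p>
<p><code>from mpmath import mp, mpf, exp

def exp_cf_mp(z, terms=80):
    # e^z = 1 + 2z/(2 - z + z^2/(6 + z^2/(10 + z^2/(14 + ...)))),
    # with partial denominators 4k + 2, evaluated bottom-up.
    z = mpf(z)
    acc = mpf(4 * terms + 2)
    for k in range(terms - 1, 0, -1):
        acc = mpf(4 * k + 2) + z * z / acc
    return 1 + 2 * z / (2 - z + z * z / acc)

mp.dps = 16
print(exp_cf_mp(38.5))   # badly off, consistent with the 16-digit runs above
mp.dps = 50
print(exp_cf_mp(38.5))   # ~5.2521552285925e+16, now matching exp(38.5)
print(exp(mpf('38.5')))
</code></p>
<p>One plausible source of the loss is the cancellation in the denominator $2 - z + \cdots$, which for large $z$ is tiny compared with the terms that produce it, so a few extra digits of working precision restore the result.</p>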
|
Existence of $\alpha\in \Bbb C$ such that $f(\alpha)=f(\alpha+1)=0$. | 2,102,688 | 5 | https://math.stackexchange.com/questions/2102688/existence-of-alpha-in-bbb-c-such-that-f-alpha-f-alpha1-0 | <blockquote>
<p>Let <span class="math-container">$f(x)\in \Bbb Q[x]$</span> be an irreducible polynomial over <span class="math-container">$\Bbb Q$</span>.</p>
<p>Show that there exists no complex number <span class="math-container">$\alpha$</span> such that <span class="math-container">$f(\alpha)=f(\alpha+1)=0$</span>.</p>
</blockquote>
<p>Following @quasi:</p>
<p>Let <span class="math-container">$f(x)=a_0+a_1x+a_2x^2+\dots +a_nx^n$</span>.</p>
<p>Define <span class="math-container">$g(x)=f(x+1)-f(x)$</span></p>
<p>Then <span class="math-container">$g(\alpha)=0$</span>. Also <span class="math-container">$g(\bar \alpha)=0\implies g(x)=(x-\alpha)^a(x-\bar \alpha)^b h(x)$</span>, where <span class="math-container">$h(x)\in \Bbb C[x]$</span>.</p>
<p>What to do now? Please help.</p>
| [
{
"answer_id": 2102800,
"score": 5,
"text": "<p>Suppose $f$ is irreducible in $\\mathbb{Q}[x]$ and $\\alpha$ is a complex number such that $f(\\alpha)=f(\\alpha+1)=0$.\n<p>\nThe goal is to derive a contradiction.\n<p>\nLet $n = \\text{deg}(f).\\;\\,$Since f is irreducible in $\\mathbb{Q}[x]$ and $\\alpha$ is a root of $f$, there is no nonzero polynomial in $\\mathbb{Q}[x]$ of degree less than $n$ for which $\\alpha$ is a root.\n<p>\nLet $g(x) = f(x + 1) - f(x)$.\n<p>\nIf $g$ is the zero polynomial, then for every root $r$ of $f$, $r+1$ would also be a root, hence $f$ would have infinitely many roots, which is silly -- $f$ is a nonzero polynomial so can't have infinitely many roots.\n<p>\nNote that $f(x+1)$ has the same leading coefficient as $f(x)$, hence $\\text{deg}(g) < n$.\n<p>\n<p>\nSince $f(\\alpha)=f(\\alpha+1)=0$, $\\alpha$ is a root of $g$.\n<p>\nBut then $g$ is a nonzero polynomial in $\\mathbb{Q}[x]$ of degree less than $n$, having $\\alpha$ as a root, contradiction.</p>\n"
},
{
"answer_id": 2102794,
"score": 3,
"text": "<p>The two polynomials $f(x)$ and $f(x+1)$ have rational coefficients, so their GCD is a polynomial $h(x)$ that also has rational coefficients. Since $f$ is irreducible and $h$ divides $f$, $h$ must either be a nonzero constant or $h=f$. From the hypotheses we get $h(\\alpha)=0$, so $h$ cannot be a nonzero constant. So $h=f$. Then $f(x)$ divides $f(x+1)$ ; since those polynomials have the same degree, there is a constant $c$ with $f(x+1)=cf(x)$. Looking at the highest degree monomials, we get $c=1$. So $f(x+1)=f(x)$ which is impossible.</p>\n"
},
{
"answer_id": 2104640,
"score": 3,
"text": "<p>This is a conceptual proof: if $f(a)=f(a+1)=0$, then $a$ is a root of both $f(x)$ and $f(x+1)$. Both polynomials are irreducible, hence minimal polynomials of $a$. By uniqueness, we obtain $f(x)=f(x+1)$ as polynomials, hence by plugging in $x=a+1$ we get $0=f(a+1)=f(a+2)$. Continue with $a+2,a+3, \\dotsc$ and you will obtain infinitely many roots. Contradiction!</p>\n"
}
] | Question: Existence of $\alpha\in \Bbb C$ such that $f(\alpha)=f(\alpha+1)=0$.
Answer 1: <p>Suppose $f$ is irreducible in $\mathbb{Q}[x]$ and $\alpha$ is a complex number such that $f(\alpha)=f(\alpha+1)=0$.
<p>
The goal is to derive a contradiction.
<p>
Let $n = \text{deg}(f)$. Since $f$ is irreducible in $\mathbb{Q}[x]$ and $\alpha$ is a root of $f$, there is no nonzero polynomial in $\mathbb{Q}[x]$ of degree less than $n$ for which $\alpha$ is a root.
<p>
Let $g(x) = f(x + 1) - f(x)$.
<p>
If $g$ is the zero polynomial, then for every root $r$ of $f$, $r+1$ would also be a root, hence $f$ would have infinitely many roots, which is silly -- $f$ is a nonzero polynomial so can't have infinitely many roots.
<p>
Note that $f(x+1)$ has the same leading coefficient as $f(x)$, hence $\text{deg}(g) < n$.
<p>
<p>
Since $f(\alpha)=f(\alpha+1)=0$, $\alpha$ is a root of $g$.
<p>
But then $g$ is a nonzero polynomial in $\mathbb{Q}[x]$ of degree less than $n$, having $\alpha$ as a root, contradiction.</p>
Answer 2: <p>The two polynomials $f(x)$ and $f(x+1)$ have rational coefficients, so their GCD is a polynomial $h(x)$ that also has rational coefficients. Since $f$ is irreducible and $h$ divides $f$, $h$ must either be a nonzero constant or $h=f$. From the hypotheses we get $h(\alpha)=0$, so $h$ cannot be a nonzero constant. So $h=f$. Then $f(x)$ divides $f(x+1)$; since those polynomials have the same degree, there is a constant $c$ with $f(x+1)=cf(x)$. Looking at the highest degree monomials, we get $c=1$. So $f(x+1)=f(x)$, which is impossible: writing $n=\deg f\geq 1$ and $a_n\neq 0$ for the leading coefficient, comparing the coefficients of $x^{n-1}$ on both sides gives $na_n=0$.</p>
Answer 3: <p>This is a conceptual proof: if $f(a)=f(a+1)=0$, then $a$ is a root of both $f(x)$ and $f(x+1)$. Both polynomials are irreducible, hence minimal polynomials of $a$. By uniqueness, we obtain $f(x)=f(x+1)$ as polynomials, hence by plugging in $x=a+1$ we get $0=f(a+1)=f(a+2)$. Continue with $a+2,a+3, \dotsc$ and you will obtain infinitely many roots. Contradiction!</p>
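<p>A small sympy check of these arguments (my own illustration; $x^5 - x - 1$ is a standard example of a polynomial irreducible over $\Bbb Q$): the leading terms of $f(x+1)$ and $f(x)$ cancel, so the difference has strictly smaller degree, and the gcd of $f(x)$ and $f(x+1)$ comes out constant, so they share no root $\alpha$.</p>
<p><code>from sympy import symbols, Poly, gcd, degree

x = symbols('x')
f = Poly(x**5 - x - 1, x)                        # irreducible over Q

f_shift = Poly(f.as_expr().subs(x, x + 1), x)    # f(x + 1)
g = f_shift - f                                  # g(x) = f(x+1) - f(x)

print(degree(g), '<', degree(f))                 # 4 < 5: leading terms cancel
print(gcd(f, f_shift))                           # Poly(1, x, domain='QQ'): no common root
</code></p>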
|
Cofinite topology on $X \times X$ with $X$ an infinite set | 2,102,799 | 5 | https://math.stackexchange.com/questions/2102799/cofinite-topology-on-x-times-x-with-x-an-infinite-set | <p>Let $X$ be an infinite set. And consider $(X \times X)_{cof}$ and $X_{cof} \times X_{cof}$. I can see that $(X \times X)_{cof}$ is <strong>not</strong> finer than $X_{cof} \times X_{cof}$. </p>
<p>MY QUESTION: But is $X_{cof} \times X_{cof}$ finer (hence strictly finer) than $(X \times X)_{cof}$?</p>
<p>Specifically, an open set $U$ in $(X \times X)_{cof}$ will be such that $(X\times X)-U$ is finite, meaning here that $(X\times X)-U$ is some finite set of ordered pairs. But I'm struggling to formally show that such a set can be written as an open set in $X_{cof} \times X_{cof}$, i.e., as a union of products of two sets whose complements are finite subsets of $X$. I'd appreciate any help.</p>
| [
{
"answer_id": 2104325,
"score": 2,
"text": "<p>Maybe is slightly simpler to think about <em>closed</em> sets. For any pair of points $x,y\\in X$, both $\\{x\\}$ and $\\{y\\}$ are closed in $X_{cof}$. By definition of product topology, both $\\{x\\}\\times X$ and $X\\times \\{y\\}$ are closed in $X_{cof}\\times X_{cof}$. Therefore their intersection\n$$(\\{x\\}\\times X)\\cap (X\\times \\{y\\})=\\{(x,y)\\} $$\nis closed. In particular, since finite unions of closed sets are closed, every finite set in $X\\times X$ is closed in $X_{cof}\\times X_{cof}$. This proves that $X_{cof}\\times X_{cof}$ is finer than $(X\\times X)_{cof}$.</p>\n\n<p>It is also strictly finer since for instance $\\{x\\}\\times X$ is closed in $X_{cof}\\times X_{cof}$, but since it is infinite it can not be closed in $(X\\times X)_{cof}$.</p>\n"
}
] | Question: Cofinite topology on $X \times X$ with $X$ an infinite set
Answer 1: <p>Maybe it is slightly simpler to think about <em>closed</em> sets. For any pair of points $x,y\in X$, both $\{x\}$ and $\{y\}$ are closed in $X_{cof}$. By definition of the product topology, both $\{x\}\times X$ and $X\times \{y\}$ are closed in $X_{cof}\times X_{cof}$. Therefore their intersection
$$(\{x\}\times X)\cap (X\times \{y\})=\{(x,y)\} $$
is closed. In particular, since finite unions of closed sets are closed, every finite set in $X\times X$ is closed in $X_{cof}\times X_{cof}$. This proves that $X_{cof}\times X_{cof}$ is finer than $(X\times X)_{cof}$.</p>
<p>It is also strictly finer since, for instance, $\{x\}\times X$ is closed in $X_{cof}\times X_{cof}$, but, being infinite and not all of $X\times X$, it cannot be closed in $(X\times X)_{cof}$.</p>
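<p>To connect this with the open-set formulation from the question: for a single point one has the explicit decomposition $$(X\times X)\setminus\{(a,b)\} = \big((X\setminus\{a\})\times X\big)\cup\big(X\times(X\setminus\{b\})\big),$$ a union of two basic open sets of $X_{cof}\times X_{cof}$; the complement of a finite set $F$ is then the intersection of finitely many such sets, one for each point of $F$, and finite intersections of open sets are open.</p>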
|
Is there a change of variables that allows one to calculate $\int_0^\pi \frac{1}{4-3\cos^2 x}\, \mathrm dx$ avoiding improperties? | 2,102,822 | 5 | https://math.stackexchange.com/questions/2102822/is-there-a-change-of-variables-that-allows-one-to-calculate-int-0-pi-frac1 | <p>I'm trying to evaluate this integral:
$$\int_0^\pi \frac{1}{4-3\cos^2 x}\, \mathrm dx$$</p>
<p>It is obvious that this is a standard Riemann integral (without improperties): the integrand is continuous on $[0,\pi]$, since $4-3\cos^2 x \geq 1$ there.</p>
<p>BUT</p>
<p>The classical $\tan x=t$ change of variables introduces two improperties (at $x=(\pi/2)^-$ and $x=(\pi/2)^+$).</p>
<p>The other possibility is $\tan(x/2)=t$, but this change introduces an improperty at $x=\pi$.</p>
<p>So, my question is:</p>
<p>Is there any change of variables (or integral method) that allows one to calculate this integral avoiding improperties? That is, only using standard Riemann integrals.</p>
| [
{
"answer_id": 2102875,
"score": 5,
"text": "<p>One approach would be to express the integrand as $$\\frac{1}{4}\\left(\\frac{1}{2-\\sqrt{3}\\cos x} +\\frac{1}{2+\\sqrt{3}\\cos x} \\right) $$ and then note that the integral over $[0,\\pi]$ for both integrands are equal so that the original integral is equal to $$\\frac{1}{2}\\int_{0}^{\\pi}\\frac{dx}{2+\\sqrt{3}\\cos x} $$ Next we put a highly non-obvious substitution $$(2+\\sqrt{3}\\cos x) (2-\\sqrt{3}\\cos y) = 1$$ to reduce the integral to $$\\frac{1}{2}\\int_{0}^{\\pi}dy=\\frac{\\pi}{2}$$ More generally if $a>|b|$ then the substitution $$(a+b\\cos x) (a-b\\cos y) =a^{2}-b^{2}$$ yields the equations $$\\sin x=\\frac{\\sqrt{a^{2}-b^{2}}\\sin y} {a-b\\cos y}, \\, \\frac{dx} {a+b\\cos x} = \\frac{dy} {\\sqrt{a^{2}-b^{2}}}$$ so that $$\\int_{0}^{\\pi}\\frac{dx}{a+b\\cos x} =\\frac{\\pi} {\\sqrt{a^{2}-b^{2}}}$$ Once again the above substitution comes straight from problem no $4$, page $266$ of Hardy's <em>A Course of Pure Mathematics</em>, 10th edition. </p>\n"
},
{
"answer_id": 2102844,
"score": 3,
"text": "<p>To avoid improper integrals for the half-angle substitution </p>\n\n<p>$$\\int_0^\\pi \\frac{1}{4-3\\cos^2 x}\\, dx = \\int_0^{\\pi/2} \\frac{1}{4-3\\cos^2 x}\\, dx +\\int_{0}^{\\pi/2} \\frac{1}{4-3\\sin^2 x}\\, dx $$</p>\n\n<p>Then evaluate each one using $t = \\tan(x/2)$</p>\n"
},
{
"answer_id": 2114403,
"score": 2,
"text": "<p><span class=\"math-container\">$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</span>\n<span class=\"math-container\">\\begin{align}\n\\int_{0}^{\\pi}{\\dd x \\over 4 - 3\\cos^{2}\\pars{x}} & =\n\\int_{0}^{\\pi/2}{\\dd x \\over 4 - 3\\cos^{2}\\pars{x}} +\n\\int_{\\pi/2}^{\\pi}{\\dd x \\over 4 - 3\\cos^{2}\\pars{x}}\n\\\\[5mm] &=\n2\\int_{0}^{\\pi/2}{\\dd x \\over 4 - 3\\cos^{2}\\pars{x}} =\n2\\int_{0}^{\\pi/2}{\\sec^{2}\\pars{x}\\,\\dd x \\over 4\\sec^{2}\\pars{x} - 3}\n\\\\[5mm] & =\n\\int_{0}^{\\pi/2}{2\\sec^{2}\\pars{x}\\,\\dd x \\over 4\\tan^{2}\\pars{x} + 1}\n\\\\[5mm] & \\stackrel{t\\ \\equiv\\ 2\\tan\\pars{x}}{=}\\,\\,\\,\n\\int_{0}^{\\infty}{\\dd t \\over t^{2} + 1} = \\bbx{\\ds{\\pi \\over 2}}\n\\end{align}</span></p>\n"
}
] | Question: Is there a change of variables that allows one to calculate $\int_0^\pi \frac{1}{4-3\cos^2 x}\, \mathrm dx$ avoiding improperties?
Answer 1: <p>One approach would be to express the integrand as $$\frac{1}{4}\left(\frac{1}{2-\sqrt{3}\cos x} +\frac{1}{2+\sqrt{3}\cos x} \right) $$ and then note that the integral over $[0,\pi]$ for both integrands are equal so that the original integral is equal to $$\frac{1}{2}\int_{0}^{\pi}\frac{dx}{2+\sqrt{3}\cos x} $$ Next we put a highly non-obvious substitution $$(2+\sqrt{3}\cos x) (2-\sqrt{3}\cos y) = 1$$ to reduce the integral to $$\frac{1}{2}\int_{0}^{\pi}dy=\frac{\pi}{2}$$ More generally if $a>|b|$ then the substitution $$(a+b\cos x) (a-b\cos y) =a^{2}-b^{2}$$ yields the equations $$\sin x=\frac{\sqrt{a^{2}-b^{2}}\sin y} {a-b\cos y}, \, \frac{dx} {a+b\cos x} = \frac{dy} {\sqrt{a^{2}-b^{2}}}$$ so that $$\int_{0}^{\pi}\frac{dx}{a+b\cos x} =\frac{\pi} {\sqrt{a^{2}-b^{2}}}$$ Once again the above substitution comes straight from problem no $4$, page $266$ of Hardy's <em>A Course of Pure Mathematics</em>, 10th edition. </p>
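<p>A minimal numerical sanity check of the closed form above (a sketch assuming <code>scipy</code> is available; it is not needed for the proof):</p>
<pre><code>import math
from scipy.integrate import quad  # adaptive quadrature: quad(f, a, b) -> (value, abserr)

# Closed form: integral_0^pi dx / (a + b*cos(x)) = pi / sqrt(a^2 - b^2) for a > |b|
a, b = 2.0, math.sqrt(3.0)
val, _ = quad(lambda x: 1.0 / (a + b * math.cos(x)), 0.0, math.pi)
assert abs(val - math.pi / math.sqrt(a * a - b * b)) < 1e-9  # here sqrt(4 - 3) = 1

# The original integral is half of the integral above, hence pi/2
orig, _ = quad(lambda x: 1.0 / (4.0 - 3.0 * math.cos(x) ** 2), 0.0, math.pi)
assert abs(orig - math.pi / 2.0) < 1e-9
</code></pre>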
Answer 2: <p>To avoid improper integrals for the half-angle substitution </p>
<p>$$\int_0^\pi \frac{1}{4-3\cos^2 x}\, dx = \int_0^{\pi/2} \frac{1}{4-3\cos^2 x}\, dx +\int_{0}^{\pi/2} \frac{1}{4-3\sin^2 x}\, dx $$</p>
<p>Then evaluate each one using $t = \tan(x/2)$</p>
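<p>Carrying the first piece through as a worked check: with $t=\tan(x/2)$ one has $\cos x = \frac{1-t^2}{1+t^2}$ and $dx = \frac{2\,dt}{1+t^2}$, so $$\int_0^{\pi/2}\frac{dx}{4-3\cos^2 x} = \int_0^{1}\frac{2(1+t^2)}{t^4+14t^2+1}\,dt,$$ a rational integrand on the bounded interval $[0,1]$, so no improperty appears.</p>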
Answer 3: <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">\begin{align}
\int_{0}^{\pi}{\dd x \over 4 - 3\cos^{2}\pars{x}} & =
\int_{0}^{\pi/2}{\dd x \over 4 - 3\cos^{2}\pars{x}} +
\int_{\pi/2}^{\pi}{\dd x \over 4 - 3\cos^{2}\pars{x}}
\\[5mm] &=
2\int_{0}^{\pi/2}{\dd x \over 4 - 3\cos^{2}\pars{x}} =
2\int_{0}^{\pi/2}{\sec^{2}\pars{x}\,\dd x \over 4\sec^{2}\pars{x} - 3}
\\[5mm] & =
\int_{0}^{\pi/2}{2\sec^{2}\pars{x}\,\dd x \over 4\tan^{2}\pars{x} + 1}
\\[5mm] & \stackrel{t\ \equiv\ 2\tan\pars{x}}{=}\,\,\,
\int_{0}^{\infty}{\dd t \over t^{2} + 1} = \bbx{\ds{\pi \over 2}}
\end{align}</span></p>
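<p>A short symbolic check of the key algebraic step above (a sketch assuming <code>sympy</code>; its <code>simplify</code> should reduce the difference to zero):</p>
<pre><code>import sympy as sp

x, t = sp.symbols('x t')
# Identity used above: 1/(4 - 3*cos(x)**2) == sec(x)**2 / (4*tan(x)**2 + 1)
lhs = 1 / (4 - 3 * sp.cos(x) ** 2)
rhs = sp.sec(x) ** 2 / (4 * sp.tan(x) ** 2 + 1)
assert sp.simplify(lhs - rhs) == 0  # simplify is expected to return 0 here

# The final elementary integral after t = 2*tan(x)
assert sp.integrate(1 / (t ** 2 + 1), (t, 0, sp.oo)) == sp.pi / 2
</code></pre>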
|
How to prove proposition $2.2.14$ in Tao Analysis $I$? | 2,103,897 | 5 | https://math.stackexchange.com/questions/2103897/how-to-prove-proposition-2-2-14-in-tao-analysis-i | <p><strong>Proposition $2.2.14$</strong> (Strong principle of induction). Let $m_0$ be
a natural number, and let $P(m)$ be a property pertaining to an
arbitrary natural number $m$. Suppose that for each $m \geq m_0$, we
have the following implication: if $P(m')$ is true for all natural
numbers $m_0 \leq m' < m$, then $P(m)$ is also true. (In particular,
this means that $P(m_0)$ is true since in this case the hypothesis is vacuous.)
Then we can conclude that $P(m)$ is true for all natural
numbers $m\geq m_0.$ </p>
<p><strong>My Proof:</strong> Let $Q(n)$ be the property that $P(m)$ holds for all $m_0\leq m<n.$ </p>
<p><em>Base Case:</em> $Q(0)$ is vacuously true since there are no natural numbers less than $0$. </p>
<p><em>Inductive Step:</em> Suppose $Q(n)$ holds true. Then this means that $P(m)$ is true for all $m_0\leq m<n.$</p>
<p>We have to show that $Q(n++)$ is true, where $n++$ denotes the successor of $n$. This is equivalent to proving that $P(n)$ holds because $P(m)$ is already true for all natural numbers $m_0\leq m<n.$ And this is the part where I am getting stuck because there does not seem to be a clear implication from the induction step that $P(n)$ should be true. </p>
<p>One way to fix this could be to let $Q(n)$ be the property that if $P(m)$ holds for all $m_0\leq m<n$ then $P(n)$ is true, but I am not sure whether this is an appropriate way to define a property. </p>
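<p>(As a purely illustrative sketch of the pattern itself, with a hypothetical predicate: take $P(n)$ to be "$n$ is a product of primes" and $m_0 = 2$. Establishing $P(n)$ below is allowed to use $P(m')$ for every $m_0 \leq m' < n$, exactly as in the proposition.)</p>
<pre><code>from math import prod

def prime_factors(n):
    """Return a prime factorisation of n >= 2, recursing only on smaller inputs."""
    for d in range(2, n):
        if n % d == 0:
            # d and n // d are both smaller than n, so these recursive calls
            # play the role of the strong-induction hypothesis P(m').
            return prime_factors(d) + prime_factors(n // d)
    return [n]  # n has no proper divisor, i.e. n is prime

for n in range(2, 50):
    fs = prime_factors(n)
    assert prod(fs) == n and all(f >= 2 for f in fs)
</code></pre>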
| [
{
"answer_id": 3127180,
"score": 2,
"text": "<p>You've probably figured this out by now but here's my take:</p>\n\n<p>We wanna show <span class=\"math-container\">$Q(n)$</span> is true for <span class=\"math-container\">$n\\ge m_0$</span> given <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> by induction on <span class=\"math-container\">$n$</span>.</p>\n\n<p>Let <span class=\"math-container\">$n\\le m_0$</span>. Then <span class=\"math-container\">$Q(n)$</span> is vacuously true. These are a bunch of base cases. Now suppose that <span class=\"math-container\">$Q(n)$</span> is true for some <span class=\"math-container\">$n\\ge m_0$</span>. Well <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> so <span class=\"math-container\">$P(n)$</span> is true. This means that <span class=\"math-container\">$P(m')$</span> is true for <span class=\"math-container\">$m_0\\le m'<n+1$</span> which is another way of saying <span class=\"math-container\">$Q(n+1)$</span>. So the induction is closed, and <span class=\"math-container\">$Q(n)$</span> for every <span class=\"math-container\">$n\\ge m_0$</span>.</p>\n\n<p>I was confused similarly. The trick is that we need to suppose <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> because that's what strong induction is all about, and the weak induction to show <span class=\"math-container\">$Q(n)$</span> uses that fact but is kinda orthogonal to it.</p>\n"
},
{
"answer_id": 2103941,
"score": 1,
"text": "<p>$Q(m_0)$ is vacuously true.</p>\n\n<p>Suppose Q(n) is true. This means that $P(n)$ is true for all $m_0\\le m<n$</p>\n\n<p>From the implication in the hypothesis, which can be rewritten (with a slight change of notation) as:</p>\n\n<p>\"if $P(m)$ is true for all natural \nnumbers $m_0 \\leq m < n$, then $P(n)$ is also true.\"</p>\n\n<p>it follows that $P(n)$ is true. As $P(m)$ is true for $m<n$, one has:</p>\n\n<p>$P(n)$ is true for all $m_0\\le m<n+1$</p>\n\n<p>So $Q(n+1)$ is true.</p>\n"
},
{
"answer_id": 4113331,
"score": 1,
"text": "<h2>Slightly long approach.</h2>\n<p>I spent some time trying to find an answer to this same question, while there are good answers out there. I still had some questions that I wanted further justification for and I write this answer for others that might stumble on the same search as I did.</p>\n<p>I have seen a fair bit of some common errors in many answers, one being that <span class=\"math-container\">$m=0$</span> has been used in the base case of induction proofs and not <span class=\"math-container\">$m=m_0$</span>.</p>\n<h3>Motivation for this late post.</h3>\n<p>Most of the answers I have seen used Terence's hint of defining <span class=\"math-container\">$Q(n)$</span> to be the property that <span class=\"math-container\">$P(m)$</span> is true for all <span class=\"math-container\">$m_0\\leq m < n$</span>. For clarity transcribing <em>proposition 2.2.14</em> in logic symbol it takes the form</p>\n<p><span class=\"math-container\">$$\\left(\\forall m\\geq m_0 \\left[(\\forall m_0\\leq m' < m\\; P(m')) \\Rightarrow P(m)\\right]\\right) \\Rightarrow \\forall m\\geq m_0\\; P(m)\\tag{*}\\label{*}$$</span></p>\n<p>Many for the answers I have seen the proofs end up taking the form</p>\n<p><span class=\"math-container\">$$\\left(\\forall m\\geq m_0 \\left[(\\forall m_0\\leq m' < m\\; P(m')) \\Rightarrow P(m)\\right]\\right) \\Rightarrow \\forall m\\geq m_0\\; Q(m)$$</span></p>\n<p>Which can be rewritten in terms of our definition of <span class=\"math-container\">$Q(n)$</span> as</p>\n<p><span class=\"math-container\">$$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right) \\Rightarrow \\forall m\\geq m_0\\; Q(m)\\label{1}\\tag{1}$$</span></p>\n<p>And I think this is exactly what the hint is implying (or so I conjecture). But I think there is a crucial justification missing to keep the tone of the book up until this point. In Daniel J. Velleman's book <em>"How to prove it: A structured approach."</em> (3rd edition ), in chapter 6.4 exercise (1), he ask the reader to justify strong induction and in the first part of the exercise he ask the reader to show the following. (little changed to fit Terence's definition.)</p>\n<p><span class=\"math-container\">$$\\forall m\\geq m_0 P(m) \\iff \\forall n\\geq m_0 Q(n)\\tag{2}\\label{2}$$</span></p>\n<p>So if we approach the proof by (\\ref{1}) after proving (\\ref{2}), it would the the same as proving (\\ref{*}). But why would we rather want to use the right hand side of (\\ref{2}) rather than the left ? I will elaborate on that in the next section.</p>\n<h3>Difference in induction hypotheses.</h3>\n<p>For a implication such as (\\ref{1}) for example, we start out with supposing the left hand of the implication (<em>the antecedent</em>) to be true and then use the most suitable proofing method to prove the right hand side of the implication (<em>the consequence</em>).</p>\n<p>In (\\ref{1}) the most suitable approach for the right hand side is proof by mathematical induction, proving <span class=\"math-container\">$\\forall m\\leq m_0 Q(n)$</span> by standard induction amounts to proving <span class=\"math-container\">$Q(m_0)$</span> (<em>base case</em>) and <span class=\"math-container\">$(Q(m)\\Rightarrow Q(m++))$</span>. 
The left hand side of (\\ref{1}) is really beneficial since that helps in proving <span class=\"math-container\">$(Q(m)\\Rightarrow Q(m++))$</span>.</p>\n<p>We can show that <span class=\"math-container\">$(Q(m)\\Rightarrow P(m))\\Rightarrow Q(m++)$</span>, our induction hypothesis of proving <span class=\"math-container\">$\\forall m\\leq m_0 Q(n)$</span> would be something along the lines of</p>\n<blockquote>\n<p>Suppose inductively that <span class=\"math-container\">$Q(m)$</span> is true for some arbitrary natural number <span class=\"math-container\">$m\\geq m_0$</span>, we aim to show that <span class=\"math-container\">$Q(m++)$</span> is true.</p>\n</blockquote>\n<p>Here our goal becomes to show that <span class=\"math-container\">$Q(m++)$</span> or rather <span class=\"math-container\">$(\\forall m_0\\leq m' < m++\\; P(m'))$</span> is true, we will do this by showing that <span class=\"math-container\">$(Q(m)\\Rightarrow P(m))\\Rightarrow Q(m++)$</span>. Essentially the inductive hypothesis gives us by definition of <span class=\"math-container\">$Q(m)$</span> that <span class=\"math-container\">$(\\forall m_0\\leq m' < m\\; P(m'))$</span> which is just one lower than what we are trying to prove.</p>\n<p>Had we tried to prove <span class=\"math-container\">$\\forall m\\geq m_0\\; P(m)$</span> instead as the right hand side of (\\ref{1}), then our induction hypothesis would be along the lines of</p>\n<blockquote>\n<p>Suppose inductively that <span class=\"math-container\">$P(m)$</span> is true for a arbitrary natural number <span class=\"math-container\">$m\\geq m_0$</span>, we aim to show <span class=\"math-container\">$P(m++)$</span> is true.</p>\n</blockquote>\n<p>This inductive hypothesis gives us that <span class=\"math-container\">$P('m)$</span> is true but we can't really use the original hypothesis <span class=\"math-container\">$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right)$</span> much since that would require us to show that <span class=\"math-container\">$Q(m++)$</span> to infer (by modus ponens) that <span class=\"math-container\">$P(m++)$</span> is true. For that we would need to know that <span class=\"math-container\">$P(m')$</span> holds true for each natural number <span class=\"math-container\">$m'$</span> greater than or equal to <span class=\"math-container\">$m_0$</span> but less than <span class=\"math-container\">$m++$</span>. Our inductive hypothesis simply doesn't give as much information as the one obtained from <span class=\"math-container\">$\\forall m\\geq m_0 Q(m)$</span>.</p>\n<h1>Tl;dr: the proof</h1>\n<p>I do apologise for the length, I write in this way since many reading this book might be having their first exposure to some proofing techniques thus I feel it justified.</p>\n<h3>Some what of a lemma,</h3>\n<p>Now we aim to prove (\\ref{2}), <span class=\"math-container\">$(\\Rightarrow)$</span> Suppose that <span class=\"math-container\">$\\forall m\\geq m_0\\;P(m)$</span>. 
Let <span class=\"math-container\">$n$</span> be a arbitrary natural number such that <span class=\"math-container\">$n\\geq m_0$</span>, we aim to prove <span class=\"math-container\">$Q(n)$</span> to be true which by definition amount to proving that <span class=\"math-container\">$\\forall m_0\\leq m' < n\\; P(m')$</span> is true, which follows trivially since by assumption <span class=\"math-container\">$P(k)$</span> is true for all natural numbers <span class=\"math-container\">$k\\geq m_0$</span>, thus <span class=\"math-container\">$Q(n)$</span> is true.</p>\n<p>Suppose that <span class=\"math-container\">$\\forall n\\geq m_0\\; Q(n)$</span>, and let <span class=\"math-container\">$m$</span> be a arbitrary natural number such that <span class=\"math-container\">$m\\geq m_0$</span>. We aim to show that <span class=\"math-container\">$P(m)$</span> is true, since <span class=\"math-container\">$m\\geq m_0$</span> and <span class=\"math-container\">$m++ > m$</span> (this can be proven) then by transitivity <span class=\"math-container\">$m++ > m_0$</span> thus by our hypothesis <span class=\"math-container\">$Q(m++)$</span> and by definition of <span class=\"math-container\">$Q(m++)$</span> we can infer that <span class=\"math-container\">$P(m)$</span> is true.\n<span class=\"math-container\">$$\\tag*{$\\blacksquare$}$$</span></p>\n<h3>The main proof.</h3>\n<p>We we aim to prove <em>proposition 2.2.14</em>, we do so by proving (\\ref{1}) using many of the facts we have already laid out.</p>\n<p>Suppose <span class=\"math-container\">$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right)$</span>,\nthat is for each <span class=\"math-container\">$m\\geq m_0$</span> that if <span class=\"math-container\">$Q(m)$</span> is true then <span class=\"math-container\">$P(m)$</span> is true. Now we employ mathematical induction in our goal of proving that <span class=\"math-container\">$\\forall n\\geq m_0\\; Q(n)$</span>, for the base case let <span class=\"math-container\">$n=m_0$</span> we then aim to prove that <span class=\"math-container\">$Q(m_0)$</span> is true. By definition we aim to show <span class=\"math-container\">$\\forall m_0\\geq m' < m_0\\; P(m')$</span>, by <em>proposition. 2.2.13</em> (Trichotomy of order for natural numbers) there can be no natural number such that either <span class=\"math-container\">$m_0 < m'$</span> or <span class=\"math-container\">$m_0 =m$</span> and at the same time <span class=\"math-container\">$m'<m_0$</span> this this is vacuously true, thus <span class=\"math-container\">$Q(m_0)$</span> is vacuously true.</p>\n<p>Now suppose inductively that <span class=\"math-container\">$Q(m)$</span> is true for a arbitrary natural number <span class=\"math-container\">$m\\geq m_0$</span>, we aim to show that <span class=\"math-container\">$Q(m++)$</span> is true. Since <span class=\"math-container\">$Q(m)$</span> holds then <span class=\"math-container\">$P(m')$</span> for each natural number <span class=\"math-container\">$m'$</span> less than or equal to <span class=\"math-container\">$m_0$</span> and less than <span class=\"math-container\">$m$</span>, and by our assumption <span class=\"math-container\">$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right)$</span> we can infer that <span class=\"math-container\">$P(m)$</span> is true. 
Thus <span class=\"math-container\">$P(m')$</span> hold for each natural number <span class=\"math-container\">$m'$</span> such that <span class=\"math-container\">$m_0\\geq m' \\leq m$</span> or rather <span class=\"math-container\">$m_0\\geq m' < m++$</span> which is the definition of <span class=\"math-container\">$Q(m++)$</span>, therefore we have closed the induction.</p>\n<p>Since we have proven <span class=\"math-container\">$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right) \\Rightarrow \\forall n\\geq m_0\\; Q(n)$</span> and <span class=\"math-container\">$\\forall m\\geq m_0 P(m) \\iff \\forall n\\geq m_0 Q(n)$</span>, we have equivalently proven (\\ref{*})\n<span class=\"math-container\">$$\\tag*{$\\blacksquare$}$$</span></p>\n<h2>Conclusion.</h2>\n<p>In essence this what a very long winded explanation for how I think the hint was supposed to be used (and why), along with the justification of (\\ref{2}) that I felt was missing this was motivated by the aforementioned exercise in <em>"How to prove it: A structured approach."</em> (perhaps it is obvious but I wanted to keep the tone of the book).This is very much open to critique since I am very much a self taught novice and appreciate all feedback.</p>\n"
},
{
"answer_id": 2103918,
"score": 0,
"text": "<p>I think that a rewriting of the proposition will help in resolving the problem. What Tao actually meant was: </p>\n\n<p><strong>Proposition $2.2.14$</strong> (Strong principle of induction). Let $m_0$ be \na natural number, and let $P(m)$ be a property pertaining to an \narbitrary natural number $m$ <strong>such that</strong> for each $m \\geq m_0$, we \nhave the following implication: if $P(m')$ is true for all natural \nnumbers $m_0 \\leq m' < m$, then $P(m)$ is also true. (In particular, \nthis means that $P(m_0)$ is true since in this case, the hypothesis is vacuous\n.) Then we can conclude that $P(m)$ is true for all natural \nnumbers $m\\geq m_0.$</p>\n\n<p>Thus the fact that $P(n)$ holds is automatically implied by the definition of $P(n)$. What that means is that whenever we say $P(n)$ holds for some $n\\geq m_0$ then it is equivalent to saying that $P(m)$ holds for all natural numbers $m_0\\leq m<n$ and $P(n)$. </p>\n\n<p>This is what I think, but maybe I am wrong. </p>\n"
},
{
"answer_id": 3127180,
"score": 2,
"text": "<p>You've probably figured this out by now but here's my take:</p>\n\n<p>We wanna show <span class=\"math-container\">$Q(n)$</span> is true for <span class=\"math-container\">$n\\ge m_0$</span> given <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> by induction on <span class=\"math-container\">$n$</span>.</p>\n\n<p>Let <span class=\"math-container\">$n\\le m_0$</span>. Then <span class=\"math-container\">$Q(n)$</span> is vacuously true. These are a bunch of base cases. Now suppose that <span class=\"math-container\">$Q(n)$</span> is true for some <span class=\"math-container\">$n\\ge m_0$</span>. Well <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> so <span class=\"math-container\">$P(n)$</span> is true. This means that <span class=\"math-container\">$P(m')$</span> is true for <span class=\"math-container\">$m_0\\le m'<n+1$</span> which is another way of saying <span class=\"math-container\">$Q(n+1)$</span>. So the induction is closed, and <span class=\"math-container\">$Q(n)$</span> for every <span class=\"math-container\">$n\\ge m_0$</span>.</p>\n\n<p>I was confused similarly. The trick is that we need to suppose <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> because that's what strong induction is all about, and the weak induction to show <span class=\"math-container\">$Q(n)$</span> uses that fact but is kinda orthogonal to it.</p>\n"
},
{
"answer_id": 2103941,
"score": 1,
"text": "<p>$Q(m_0)$ is vacuously true.</p>\n\n<p>Suppose Q(n) is true. This means that $P(n)$ is true for all $m_0\\le m<n$</p>\n\n<p>From the implication in the hypothesis, which can be rewritten (with a slight change of notation) as:</p>\n\n<p>\"if $P(m)$ is true for all natural \nnumbers $m_0 \\leq m < n$, then $P(n)$ is also true.\"</p>\n\n<p>it follows that $P(n)$ is true. As $P(m)$ is true for $m<n$, one has:</p>\n\n<p>$P(n)$ is true for all $m_0\\le m<n+1$</p>\n\n<p>So $Q(n+1)$ is true.</p>\n"
},
{
"answer_id": 4113331,
"score": 1,
"text": "<h2>Slightly long approach.</h2>\n<p>I spent some time trying to find an answer to this same question, while there are good answers out there. I still had some questions that I wanted further justification for and I write this answer for others that might stumble on the same search as I did.</p>\n<p>I have seen a fair bit of some common errors in many answers, one being that <span class=\"math-container\">$m=0$</span> has been used in the base case of induction proofs and not <span class=\"math-container\">$m=m_0$</span>.</p>\n<h3>Motivation for this late post.</h3>\n<p>Most of the answers I have seen used Terence's hint of defining <span class=\"math-container\">$Q(n)$</span> to be the property that <span class=\"math-container\">$P(m)$</span> is true for all <span class=\"math-container\">$m_0\\leq m < n$</span>. For clarity transcribing <em>proposition 2.2.14</em> in logic symbol it takes the form</p>\n<p><span class=\"math-container\">$$\\left(\\forall m\\geq m_0 \\left[(\\forall m_0\\leq m' < m\\; P(m')) \\Rightarrow P(m)\\right]\\right) \\Rightarrow \\forall m\\geq m_0\\; P(m)\\tag{*}\\label{*}$$</span></p>\n<p>Many for the answers I have seen the proofs end up taking the form</p>\n<p><span class=\"math-container\">$$\\left(\\forall m\\geq m_0 \\left[(\\forall m_0\\leq m' < m\\; P(m')) \\Rightarrow P(m)\\right]\\right) \\Rightarrow \\forall m\\geq m_0\\; Q(m)$$</span></p>\n<p>Which can be rewritten in terms of our definition of <span class=\"math-container\">$Q(n)$</span> as</p>\n<p><span class=\"math-container\">$$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right) \\Rightarrow \\forall m\\geq m_0\\; Q(m)\\label{1}\\tag{1}$$</span></p>\n<p>And I think this is exactly what the hint is implying (or so I conjecture). But I think there is a crucial justification missing to keep the tone of the book up until this point. In Daniel J. Velleman's book <em>"How to prove it: A structured approach."</em> (3rd edition ), in chapter 6.4 exercise (1), he ask the reader to justify strong induction and in the first part of the exercise he ask the reader to show the following. (little changed to fit Terence's definition.)</p>\n<p><span class=\"math-container\">$$\\forall m\\geq m_0 P(m) \\iff \\forall n\\geq m_0 Q(n)\\tag{2}\\label{2}$$</span></p>\n<p>So if we approach the proof by (\\ref{1}) after proving (\\ref{2}), it would the the same as proving (\\ref{*}). But why would we rather want to use the right hand side of (\\ref{2}) rather than the left ? I will elaborate on that in the next section.</p>\n<h3>Difference in induction hypotheses.</h3>\n<p>For a implication such as (\\ref{1}) for example, we start out with supposing the left hand of the implication (<em>the antecedent</em>) to be true and then use the most suitable proofing method to prove the right hand side of the implication (<em>the consequence</em>).</p>\n<p>In (\\ref{1}) the most suitable approach for the right hand side is proof by mathematical induction, proving <span class=\"math-container\">$\\forall m\\leq m_0 Q(n)$</span> by standard induction amounts to proving <span class=\"math-container\">$Q(m_0)$</span> (<em>base case</em>) and <span class=\"math-container\">$(Q(m)\\Rightarrow Q(m++))$</span>. 
The left hand side of (\\ref{1}) is really beneficial since that helps in proving <span class=\"math-container\">$(Q(m)\\Rightarrow Q(m++))$</span>.</p>\n<p>We can show that <span class=\"math-container\">$(Q(m)\\Rightarrow P(m))\\Rightarrow Q(m++)$</span>, our induction hypothesis of proving <span class=\"math-container\">$\\forall m\\leq m_0 Q(n)$</span> would be something along the lines of</p>\n<blockquote>\n<p>Suppose inductively that <span class=\"math-container\">$Q(m)$</span> is true for some arbitrary natural number <span class=\"math-container\">$m\\geq m_0$</span>, we aim to show that <span class=\"math-container\">$Q(m++)$</span> is true.</p>\n</blockquote>\n<p>Here our goal becomes to show that <span class=\"math-container\">$Q(m++)$</span> or rather <span class=\"math-container\">$(\\forall m_0\\leq m' < m++\\; P(m'))$</span> is true, we will do this by showing that <span class=\"math-container\">$(Q(m)\\Rightarrow P(m))\\Rightarrow Q(m++)$</span>. Essentially the inductive hypothesis gives us by definition of <span class=\"math-container\">$Q(m)$</span> that <span class=\"math-container\">$(\\forall m_0\\leq m' < m\\; P(m'))$</span> which is just one lower than what we are trying to prove.</p>\n<p>Had we tried to prove <span class=\"math-container\">$\\forall m\\geq m_0\\; P(m)$</span> instead as the right hand side of (\\ref{1}), then our induction hypothesis would be along the lines of</p>\n<blockquote>\n<p>Suppose inductively that <span class=\"math-container\">$P(m)$</span> is true for a arbitrary natural number <span class=\"math-container\">$m\\geq m_0$</span>, we aim to show <span class=\"math-container\">$P(m++)$</span> is true.</p>\n</blockquote>\n<p>This inductive hypothesis gives us that <span class=\"math-container\">$P('m)$</span> is true but we can't really use the original hypothesis <span class=\"math-container\">$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right)$</span> much since that would require us to show that <span class=\"math-container\">$Q(m++)$</span> to infer (by modus ponens) that <span class=\"math-container\">$P(m++)$</span> is true. For that we would need to know that <span class=\"math-container\">$P(m')$</span> holds true for each natural number <span class=\"math-container\">$m'$</span> greater than or equal to <span class=\"math-container\">$m_0$</span> but less than <span class=\"math-container\">$m++$</span>. Our inductive hypothesis simply doesn't give as much information as the one obtained from <span class=\"math-container\">$\\forall m\\geq m_0 Q(m)$</span>.</p>\n<h1>Tl;dr: the proof</h1>\n<p>I do apologise for the length, I write in this way since many reading this book might be having their first exposure to some proofing techniques thus I feel it justified.</p>\n<h3>Some what of a lemma,</h3>\n<p>Now we aim to prove (\\ref{2}), <span class=\"math-container\">$(\\Rightarrow)$</span> Suppose that <span class=\"math-container\">$\\forall m\\geq m_0\\;P(m)$</span>. 
Let <span class=\"math-container\">$n$</span> be a arbitrary natural number such that <span class=\"math-container\">$n\\geq m_0$</span>, we aim to prove <span class=\"math-container\">$Q(n)$</span> to be true which by definition amount to proving that <span class=\"math-container\">$\\forall m_0\\leq m' < n\\; P(m')$</span> is true, which follows trivially since by assumption <span class=\"math-container\">$P(k)$</span> is true for all natural numbers <span class=\"math-container\">$k\\geq m_0$</span>, thus <span class=\"math-container\">$Q(n)$</span> is true.</p>\n<p>Suppose that <span class=\"math-container\">$\\forall n\\geq m_0\\; Q(n)$</span>, and let <span class=\"math-container\">$m$</span> be a arbitrary natural number such that <span class=\"math-container\">$m\\geq m_0$</span>. We aim to show that <span class=\"math-container\">$P(m)$</span> is true, since <span class=\"math-container\">$m\\geq m_0$</span> and <span class=\"math-container\">$m++ > m$</span> (this can be proven) then by transitivity <span class=\"math-container\">$m++ > m_0$</span> thus by our hypothesis <span class=\"math-container\">$Q(m++)$</span> and by definition of <span class=\"math-container\">$Q(m++)$</span> we can infer that <span class=\"math-container\">$P(m)$</span> is true.\n<span class=\"math-container\">$$\\tag*{$\\blacksquare$}$$</span></p>\n<h3>The main proof.</h3>\n<p>We we aim to prove <em>proposition 2.2.14</em>, we do so by proving (\\ref{1}) using many of the facts we have already laid out.</p>\n<p>Suppose <span class=\"math-container\">$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right)$</span>,\nthat is for each <span class=\"math-container\">$m\\geq m_0$</span> that if <span class=\"math-container\">$Q(m)$</span> is true then <span class=\"math-container\">$P(m)$</span> is true. Now we employ mathematical induction in our goal of proving that <span class=\"math-container\">$\\forall n\\geq m_0\\; Q(n)$</span>, for the base case let <span class=\"math-container\">$n=m_0$</span> we then aim to prove that <span class=\"math-container\">$Q(m_0)$</span> is true. By definition we aim to show <span class=\"math-container\">$\\forall m_0\\geq m' < m_0\\; P(m')$</span>, by <em>proposition. 2.2.13</em> (Trichotomy of order for natural numbers) there can be no natural number such that either <span class=\"math-container\">$m_0 < m'$</span> or <span class=\"math-container\">$m_0 =m$</span> and at the same time <span class=\"math-container\">$m'<m_0$</span> this this is vacuously true, thus <span class=\"math-container\">$Q(m_0)$</span> is vacuously true.</p>\n<p>Now suppose inductively that <span class=\"math-container\">$Q(m)$</span> is true for a arbitrary natural number <span class=\"math-container\">$m\\geq m_0$</span>, we aim to show that <span class=\"math-container\">$Q(m++)$</span> is true. Since <span class=\"math-container\">$Q(m)$</span> holds then <span class=\"math-container\">$P(m')$</span> for each natural number <span class=\"math-container\">$m'$</span> less than or equal to <span class=\"math-container\">$m_0$</span> and less than <span class=\"math-container\">$m$</span>, and by our assumption <span class=\"math-container\">$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right)$</span> we can infer that <span class=\"math-container\">$P(m)$</span> is true. 
Thus <span class=\"math-container\">$P(m')$</span> hold for each natural number <span class=\"math-container\">$m'$</span> such that <span class=\"math-container\">$m_0\\geq m' \\leq m$</span> or rather <span class=\"math-container\">$m_0\\geq m' < m++$</span> which is the definition of <span class=\"math-container\">$Q(m++)$</span>, therefore we have closed the induction.</p>\n<p>Since we have proven <span class=\"math-container\">$\\left(\\forall m\\geq m_0 \\left[(Q(m) \\Rightarrow P(m)\\right]\\right) \\Rightarrow \\forall n\\geq m_0\\; Q(n)$</span> and <span class=\"math-container\">$\\forall m\\geq m_0 P(m) \\iff \\forall n\\geq m_0 Q(n)$</span>, we have equivalently proven (\\ref{*})\n<span class=\"math-container\">$$\\tag*{$\\blacksquare$}$$</span></p>\n<h2>Conclusion.</h2>\n<p>In essence this what a very long winded explanation for how I think the hint was supposed to be used (and why), along with the justification of (\\ref{2}) that I felt was missing this was motivated by the aforementioned exercise in <em>"How to prove it: A structured approach."</em> (perhaps it is obvious but I wanted to keep the tone of the book).This is very much open to critique since I am very much a self taught novice and appreciate all feedback.</p>\n"
},
{
"answer_id": 2103918,
"score": 0,
"text": "<p>I think that a rewriting of the proposition will help in resolving the problem. What Tao actually meant was: </p>\n\n<p><strong>Proposition $2.2.14$</strong> (Strong principle of induction). Let $m_0$ be \na natural number, and let $P(m)$ be a property pertaining to an \narbitrary natural number $m$ <strong>such that</strong> for each $m \\geq m_0$, we \nhave the following implication: if $P(m')$ is true for all natural \nnumbers $m_0 \\leq m' < m$, then $P(m)$ is also true. (In particular, \nthis means that $P(m_0)$ is true since in this case, the hypothesis is vacuous\n.) Then we can conclude that $P(m)$ is true for all natural \nnumbers $m\\geq m_0.$</p>\n\n<p>Thus the fact that $P(n)$ holds is automatically implied by the definition of $P(n)$. What that means is that whenever we say $P(n)$ holds for some $n\\geq m_0$ then it is equivalent to saying that $P(m)$ holds for all natural numbers $m_0\\leq m<n$ and $P(n)$. </p>\n\n<p>This is what I think, but maybe I am wrong. </p>\n"
},
{
"answer_id": 3127180,
"score": 2,
"text": "<p>You've probably figured this out by now but here's my take:</p>\n\n<p>We wanna show <span class=\"math-container\">$Q(n)$</span> is true for <span class=\"math-container\">$n\\ge m_0$</span> given <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> by induction on <span class=\"math-container\">$n$</span>.</p>\n\n<p>Let <span class=\"math-container\">$n\\le m_0$</span>. Then <span class=\"math-container\">$Q(n)$</span> is vacuously true. These are a bunch of base cases. Now suppose that <span class=\"math-container\">$Q(n)$</span> is true for some <span class=\"math-container\">$n\\ge m_0$</span>. Well <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> so <span class=\"math-container\">$P(n)$</span> is true. This means that <span class=\"math-container\">$P(m')$</span> is true for <span class=\"math-container\">$m_0\\le m'<n+1$</span> which is another way of saying <span class=\"math-container\">$Q(n+1)$</span>. So the induction is closed, and <span class=\"math-container\">$Q(n)$</span> for every <span class=\"math-container\">$n\\ge m_0$</span>.</p>\n\n<p>I was confused similarly. The trick is that we need to suppose <span class=\"math-container\">$Q(n)\\Rightarrow P(n)$</span> because that's what strong induction is all about, and the weak induction to show <span class=\"math-container\">$Q(n)$</span> uses that fact but is kinda orthogonal to it.</p>\n"
}
] | Question: How to prove proposition $2.2.14$ in Tao Analysis $I$?
Answer 1: <p>You've probably figured this out by now but here's my take:</p>
<p>We wanna show <span class="math-container">$Q(n)$</span> is true for <span class="math-container">$n\ge m_0$</span> given <span class="math-container">$Q(n)\Rightarrow P(n)$</span> by induction on <span class="math-container">$n$</span>.</p>
<p>Let <span class="math-container">$n\le m_0$</span>. Then <span class="math-container">$Q(n)$</span> is vacuously true. These are a bunch of base cases. Now suppose that <span class="math-container">$Q(n)$</span> is true for some <span class="math-container">$n\ge m_0$</span>. Well <span class="math-container">$Q(n)\Rightarrow P(n)$</span> so <span class="math-container">$P(n)$</span> is true. This means that <span class="math-container">$P(m')$</span> is true for <span class="math-container">$m_0\le m'<n+1$</span> which is another way of saying <span class="math-container">$Q(n+1)$</span>. So the induction is closed, and <span class="math-container">$Q(n)$</span> for every <span class="math-container">$n\ge m_0$</span>.</p>
<p>I was confused similarly. The trick is that we need to suppose <span class="math-container">$Q(n)\Rightarrow P(n)$</span> because that's what strong induction is all about, and the weak induction to show <span class="math-container">$Q(n)$</span> uses that fact but is kinda orthogonal to it.</p>
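<p>For readers who like to machine-check this kind of argument, the same trick can be formalized. The following is a minimal, untested Lean 4 sketch (the theorem name is ours, and the core <code>Nat</code> lemmas <code>Nat.not_lt_zero</code>, <code>Nat.le_of_lt_succ</code>, <code>Nat.eq_or_lt_of_le</code> and <code>Nat.lt_succ_self</code> are assumed to be available under these names):</p>
<pre><code>-- Strong induction derived from weak induction, via
-- Q n := "P m' holds for all m₀ ≤ m' < n".
theorem strong_induction (m₀ : Nat) (P : Nat → Prop)
    (h : ∀ m, m₀ ≤ m → (∀ m', m₀ ≤ m' → m' < m → P m') → P m) :
    ∀ m, m₀ ≤ m → P m := by
  -- Prove ∀ n, Q n by ordinary induction on n.
  have Q : ∀ n, ∀ m', m₀ ≤ m' → m' < n → P m' := by
    intro n
    induction n with
    | zero =>
      -- Q 0 is vacuously true: no natural number is < 0.
      intro m' _ hlt
      exact absurd hlt (Nat.not_lt_zero m')
    | succ n ih =>
      intro m' hm0 hlt
      -- m' < n + 1 splits into m' = n or m' < n.
      cases Nat.eq_or_lt_of_le (Nat.le_of_lt_succ hlt) with
      | inl heq =>
        -- m' = n: ih is exactly Q n, the antecedent that h needs.
        subst heq
        exact h _ hm0 ih
      | inr hlt' =>
        -- m' < n: already covered by Q n.
        exact ih m' hm0 hlt'
  -- Finally, P m follows from Q (m + 1), since m₀ ≤ m < m + 1.
  intro m hm0
  exact Q (m + 1) m hm0 (Nat.lt_succ_self m)
</code></pre>
<p>The <code>zero</code> case is the vacuous base case, and the <code>succ</code> case combines $Q(n)$ with the hypothesis to obtain $Q(n+1)$, exactly as in the answers here.</p>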
Answer 2: <p>$Q(m_0)$ is vacuously true.</p>
<p>Suppose $Q(n)$ is true. This means that $P(m)$ is true for all $m_0\le m<n$.</p>
<p>From the implication in the hypothesis, which can be rewritten (with a slight change of notation) as:</p>
<p>"if $P(m)$ is true for all natural
numbers $m_0 \leq m < n$, then $P(n)$ is also true."</p>
<p>it follows that $P(n)$ is true. As $P(m)$ is true for all $m_0\le m<n$, one has:</p>
<p>$P(m)$ is true for all $m_0\le m<n+1$</p>
<p>So $Q(n+1)$ is true.</p>
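<p>Purely as a restatement of this argument, the step can be compressed into one displayed chain (with $Q$ as in the hint):</p>
<p><span class="math-container">$$Q(n)\ \wedge\ \big(Q(n)\Rightarrow P(n)\big)\ \Longrightarrow\ Q(n)\wedge P(n)\ \Longleftrightarrow\ \big(\forall\, m_0\le m<n+1:\ P(m)\big)\ \Longleftrightarrow\ Q(n+1).$$</span></p>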
Answer 3: <h2>Slightly long approach.</h2>
<p>I spent some time trying to find an answer to this same question, and while there are good answers out there, I still had some questions that I wanted further justification for, so I write this answer for others that might stumble on the same search as I did.</p>
<p>I have seen a fair number of common errors in many answers, one being that <span class="math-container">$m=0$</span> is used in the base case of the induction proofs instead of <span class="math-container">$m=m_0$</span>.</p>
<h3>Motivation for this late post.</h3>
<p>Most of the answers I have seen used Terence's hint of defining <span class="math-container">$Q(n)$</span> to be the property that <span class="math-container">$P(m)$</span> is true for all <span class="math-container">$m_0\leq m < n$</span>. For clarity, transcribed into logical symbols, <em>proposition 2.2.14</em> takes the form</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(\forall m_0\leq m' < m\; P(m')) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; P(m)\tag{*}\label{*}$$</span></p>
<p>In many of the answers I have seen, the proofs end up taking the form</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(\forall m_0\leq m' < m\; P(m')) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; Q(m)$$</span></p>
<p>which can be rewritten in terms of our definition of <span class="math-container">$Q(n)$</span> as</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; Q(m)\label{1}\tag{1}$$</span></p>
<p>And I think this is exactly what the hint is implying (or so I conjecture). But I think there is a crucial justification missing to keep the tone of the book up until this point. In Daniel J. Velleman's book <em>"How to prove it: A structured approach."</em> (3rd edition), in chapter 6.4 exercise (1), he asks the reader to justify strong induction, and in the first part of the exercise he asks the reader to show the following (slightly changed to fit Terence's definition):</p>
<p><span class="math-container">$$\forall m\geq m_0 P(m) \iff \forall n\geq m_0 Q(n)\tag{2}\label{2}$$</span></p>
<p>So if we approach the proof by (\ref{1}) after proving (\ref{2}), it would be the same as proving (\ref{*}). But why would we want to use the right-hand side of (\ref{2}) rather than the left? I will elaborate on that in the next section.</p>
<h3>Difference in induction hypotheses.</h3>
<p>For an implication such as (\ref{1}), for example, we start out by supposing the left-hand side of the implication (<em>the antecedent</em>) to be true and then use the most suitable proof method to prove the right-hand side of the implication (<em>the consequent</em>).</p>
<p>In (\ref{1}) the most suitable approach for the right-hand side is proof by mathematical induction; proving <span class="math-container">$\forall n\geq m_0\; Q(n)$</span> by standard induction amounts to proving <span class="math-container">$Q(m_0)$</span> (<em>base case</em>) and <span class="math-container">$(Q(m)\Rightarrow Q(m++))$</span>. The left-hand side of (\ref{1}) is really beneficial since it helps in proving <span class="math-container">$(Q(m)\Rightarrow Q(m++))$</span>.</p>
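<p>Spelled out as one formula (a restatement, nothing new), the weak-induction scheme being invoked here is</p>
<p><span class="math-container">$$Q(m_0)\ \wedge\ \big(\forall m\geq m_0:\ Q(m)\Rightarrow Q(m++)\big)\ \Longrightarrow\ \forall n\geq m_0\; Q(n).$$</span></p>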
<p>We can show that <span class="math-container">$(Q(m)\Rightarrow P(m))\Rightarrow Q(m++)$</span>; our induction hypothesis when proving <span class="math-container">$\forall n\geq m_0\; Q(n)$</span> would be something along the lines of</p>
<blockquote>
<p>Suppose inductively that <span class="math-container">$Q(m)$</span> is true for some arbitrary natural number <span class="math-container">$m\geq m_0$</span>, we aim to show that <span class="math-container">$Q(m++)$</span> is true.</p>
</blockquote>
<p>Here our goal becomes to show that <span class="math-container">$Q(m++)$</span>, or rather <span class="math-container">$(\forall m_0\leq m' < m++\; P(m'))$</span>, is true, and we will do this by showing that <span class="math-container">$(Q(m)\Rightarrow P(m))\Rightarrow Q(m++)$</span>. Essentially the inductive hypothesis gives us by definition of <span class="math-container">$Q(m)$</span> that <span class="math-container">$(\forall m_0\leq m' < m\; P(m'))$</span>, which is just one step below what we are trying to prove.</p>
<p>Had we tried to prove <span class="math-container">$\forall m\geq m_0\; P(m)$</span> instead as the right hand side of (\ref{1}), then our induction hypothesis would be along the lines of</p>
<blockquote>
<p>Suppose inductively that <span class="math-container">$P(m)$</span> is true for a arbitrary natural number <span class="math-container">$m\geq m_0$</span>, we aim to show <span class="math-container">$P(m++)$</span> is true.</p>
</blockquote>
<p>This inductive hypothesis gives us that <span class="math-container">$P(m)$</span> is true, but we can't really use the original hypothesis <span class="math-container">$\left(\forall m\geq m_0 \left[Q(m) \Rightarrow P(m)\right]\right)$</span> much, since that would require us to show that <span class="math-container">$Q(m++)$</span> is true to infer (by modus ponens) that <span class="math-container">$P(m++)$</span> is true. For that we would need to know that <span class="math-container">$P(m')$</span> holds true for each natural number <span class="math-container">$m'$</span> greater than or equal to <span class="math-container">$m_0$</span> but less than <span class="math-container">$m++$</span>. Our inductive hypothesis simply doesn't give as much information as the one obtained from <span class="math-container">$\forall n\geq m_0\; Q(n)$</span>.</p>
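<p>Side by side, the two available induction hypotheses are (again, just a restatement):</p>
<p><span class="math-container">$$\underbrace{P(m)}_{\text{IH when inducting on }P}\qquad\text{vs.}\qquad\underbrace{\forall\, m_0\leq m' < m:\ P(m')}_{Q(m),\ \text{IH when inducting on }Q}$$</span></p>
<p>and only the latter is strong enough to feed the antecedent of the original hypothesis.</p>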
<h1>Tl;dr: the proof</h1>
<p>I do apologise for the length; I write in this way since many reading this book might be having their first exposure to some proof techniques, thus I feel it justified.</p>
<h3>Somewhat of a lemma.</h3>
<p>Now we aim to prove (\ref{2}). <span class="math-container">$(\Rightarrow)$</span> Suppose that <span class="math-container">$\forall m\geq m_0\;P(m)$</span>. Let <span class="math-container">$n$</span> be an arbitrary natural number such that <span class="math-container">$n\geq m_0$</span>; we aim to prove <span class="math-container">$Q(n)$</span> to be true, which by definition amounts to proving that <span class="math-container">$\forall m_0\leq m' < n\; P(m')$</span> is true. This follows trivially since by assumption <span class="math-container">$P(k)$</span> is true for all natural numbers <span class="math-container">$k\geq m_0$</span>; thus <span class="math-container">$Q(n)$</span> is true.</p>
<p>Suppose that <span class="math-container">$\forall n\geq m_0\; Q(n)$</span>, and let <span class="math-container">$m$</span> be a arbitrary natural number such that <span class="math-container">$m\geq m_0$</span>. We aim to show that <span class="math-container">$P(m)$</span> is true, since <span class="math-container">$m\geq m_0$</span> and <span class="math-container">$m++ > m$</span> (this can be proven) then by transitivity <span class="math-container">$m++ > m_0$</span> thus by our hypothesis <span class="math-container">$Q(m++)$</span> and by definition of <span class="math-container">$Q(m++)$</span> we can infer that <span class="math-container">$P(m)$</span> is true.
<span class="math-container">$$\tag*{$\blacksquare$}$$</span></p>
<h3>The main proof.</h3>
<p>We aim to prove <em>proposition 2.2.14</em>; we do so by proving (\ref{1}) using many of the facts we have already laid out.</p>
<p>Suppose <span class="math-container">$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right)$</span>,
that is, for each <span class="math-container">$m\geq m_0$</span>, if <span class="math-container">$Q(m)$</span> is true then <span class="math-container">$P(m)$</span> is true. Now we employ mathematical induction in our goal of proving that <span class="math-container">$\forall n\geq m_0\; Q(n)$</span>. For the base case, let <span class="math-container">$n=m_0$</span>; we then aim to prove that <span class="math-container">$Q(m_0)$</span> is true. By definition we aim to show <span class="math-container">$\forall m_0\leq m' < m_0\; P(m')$</span>; by <em>proposition 2.2.13</em> (Trichotomy of order for natural numbers) there can be no natural number <span class="math-container">$m'$</span> such that either <span class="math-container">$m_0 < m'$</span> or <span class="math-container">$m_0 = m'$</span> holds and at the same time <span class="math-container">$m'<m_0$</span>; thus <span class="math-container">$Q(m_0)$</span> is vacuously true.</p>
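<p>Equivalently, the base case is a statement about an empty range of indices:</p>
<p><span class="math-container">$$\{\, m' \in \mathbb{N} : m_0 \leq m' < m_0 \,\} = \varnothing.$$</span></p>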
<p>Now suppose inductively that <span class="math-container">$Q(m)$</span> is true for an arbitrary natural number <span class="math-container">$m\geq m_0$</span>; we aim to show that <span class="math-container">$Q(m++)$</span> is true. Since <span class="math-container">$Q(m)$</span> holds, <span class="math-container">$P(m')$</span> holds for each natural number <span class="math-container">$m'$</span> greater than or equal to <span class="math-container">$m_0$</span> and less than <span class="math-container">$m$</span>, and by our assumption <span class="math-container">$\left(\forall m\geq m_0 \left[Q(m) \Rightarrow P(m)\right]\right)$</span> we can infer that <span class="math-container">$P(m)$</span> is true. Thus <span class="math-container">$P(m')$</span> holds for each natural number <span class="math-container">$m'$</span> such that <span class="math-container">$m_0\leq m' \leq m$</span>, or rather <span class="math-container">$m_0\leq m' < m++$</span>, which is the definition of <span class="math-container">$Q(m++)$</span>; therefore we have closed the induction.</p>
<p>Since we have proven <span class="math-container">$\left(\forall m\geq m_0 \left[Q(m) \Rightarrow P(m)\right]\right) \Rightarrow \forall n\geq m_0\; Q(n)$</span> and <span class="math-container">$\forall m\geq m_0 P(m) \iff \forall n\geq m_0 Q(n)$</span>, we have equivalently proven (\ref{*})
<span class="math-container">$$\tag*{$\blacksquare$}$$</span></p>
<h2>Conclusion.</h2>
<p>In essence, this was a very long-winded explanation of how I think the hint was supposed to be used (and why), along with the justification of (\ref{2}) that I felt was missing; this was motivated by the aforementioned exercise in <em>"How to prove it: A structured approach."</em> (perhaps it is obvious, but I wanted to keep the tone of the book). This is very much open to critique since I am very much a self-taught novice, and I appreciate all feedback.</p>
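<p>The overall architecture of the argument, summarized in one line:</p>
<p><span class="math-container">$$\text{antecedent of } (\ref{*})\ \overset{\text{induction}}{\Longrightarrow}\ \forall n\geq m_0\; Q(n)\ \overset{(\ref{2})}{\iff}\ \forall m\geq m_0\; P(m).$$</span></p>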
Answer 4: <p>I think that a rewriting of the proposition will help in resolving the problem. What Tao actually meant was: </p>
<p><strong>Proposition $2.2.14$</strong> (Strong principle of induction). Let $m_0$ be
a natural number, and let $P(m)$ be a property pertaining to an
arbitrary natural number $m$ <strong>such that</strong> for each $m \geq m_0$, we
have the following implication: if $P(m')$ is true for all natural
numbers $m_0 \leq m' < m$, then $P(m)$ is also true. (In particular,
this means that $P(m_0)$ is true since in this case, the hypothesis is vacuous
.) Then we can conclude that $P(m)$ is true for all natural
numbers $m\geq m_0.$</p>
<p>Thus the fact that $P(n)$ holds is automatically implied by the definition of $P(n)$. What that means is that whenever we say $P(n)$ holds for some $n\geq m_0$ then it is equivalent to saying that $P(m)$ holds for all natural numbers $m_0\leq m<n$ and $P(n)$. </p>
<p>This is what I think, but maybe I am wrong. </p>
Answer 5: <p>You've probably figured this out by now but here's my take:</p>
<p>We wanna show <span class="math-container">$Q(n)$</span> is true for <span class="math-container">$n\ge m_0$</span> given <span class="math-container">$Q(n)\Rightarrow P(n)$</span> by induction on <span class="math-container">$n$</span>.</p>
<p>Let <span class="math-container">$n\le m_0$</span>. Then <span class="math-container">$Q(n)$</span> is vacuously true. These are a bunch of base cases. Now suppose that <span class="math-container">$Q(n)$</span> is true for some <span class="math-container">$n\ge m_0$</span>. Well <span class="math-container">$Q(n)\Rightarrow P(n)$</span> so <span class="math-container">$P(n)$</span> is true. This means that <span class="math-container">$P(m')$</span> is true for <span class="math-container">$m_0\le m'<n+1$</span> which is another way of saying <span class="math-container">$Q(n+1)$</span>. So the induction is closed, and <span class="math-container">$Q(n)$</span> for every <span class="math-container">$n\ge m_0$</span>.</p>
<p>I was confused similarly. The trick is that we need to suppose <span class="math-container">$Q(n)\Rightarrow P(n)$</span> because that's what strong induction is all about, and the weak induction to show <span class="math-container">$Q(n)$</span> uses that fact but is kinda orthogonal to it.</p>
Answer 6: <p>$Q(m_0)$ is vacuously true.</p>
<p>Suppose Q(n) is true. This means that $P(n)$ is true for all $m_0\le m<n$</p>
<p>From the implication in the hypothesis, which can be rewritten (with a slight change of notation) as:</p>
<p>"if $P(m)$ is true for all natural
numbers $m_0 \leq m < n$, then $P(n)$ is also true."</p>
<p>it follows that $P(n)$ is true. As $P(m)$ is true for $m<n$, one has:</p>
<p>$P(n)$ is true for all $m_0\le m<n+1$</p>
<p>So $Q(n+1)$ is true.</p>
Answer 7: <h2>Slightly long approach.</h2>
<p>I spent some time trying to find an answer to this same question, while there are good answers out there. I still had some questions that I wanted further justification for and I write this answer for others that might stumble on the same search as I did.</p>
<p>I have seen a fair bit of some common errors in many answers, one being that <span class="math-container">$m=0$</span> has been used in the base case of induction proofs and not <span class="math-container">$m=m_0$</span>.</p>
<h3>Motivation for this late post.</h3>
<p>Most of the answers I have seen used Terence's hint of defining <span class="math-container">$Q(n)$</span> to be the property that <span class="math-container">$P(m)$</span> is true for all <span class="math-container">$m_0\leq m < n$</span>. For clarity transcribing <em>proposition 2.2.14</em> in logic symbol it takes the form</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(\forall m_0\leq m' < m\; P(m')) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; P(m)\tag{*}\label{*}$$</span></p>
<p>Many for the answers I have seen the proofs end up taking the form</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(\forall m_0\leq m' < m\; P(m')) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; Q(m)$$</span></p>
<p>Which can be rewritten in terms of our definition of <span class="math-container">$Q(n)$</span> as</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; Q(m)\label{1}\tag{1}$$</span></p>
<p>And I think this is exactly what the hint is implying (or so I conjecture). But I think there is a crucial justification missing to keep the tone of the book up until this point. In Daniel J. Velleman's book <em>"How to prove it: A structured approach."</em> (3rd edition ), in chapter 6.4 exercise (1), he ask the reader to justify strong induction and in the first part of the exercise he ask the reader to show the following. (little changed to fit Terence's definition.)</p>
<p><span class="math-container">$$\forall m\geq m_0 P(m) \iff \forall n\geq m_0 Q(n)\tag{2}\label{2}$$</span></p>
<p>So if we approach the proof by (\ref{1}) after proving (\ref{2}), it would the the same as proving (\ref{*}). But why would we rather want to use the right hand side of (\ref{2}) rather than the left ? I will elaborate on that in the next section.</p>
<h3>Difference in induction hypotheses.</h3>
<p>For a implication such as (\ref{1}) for example, we start out with supposing the left hand of the implication (<em>the antecedent</em>) to be true and then use the most suitable proofing method to prove the right hand side of the implication (<em>the consequence</em>).</p>
<p>In (\ref{1}) the most suitable approach for the right hand side is proof by mathematical induction, proving <span class="math-container">$\forall m\leq m_0 Q(n)$</span> by standard induction amounts to proving <span class="math-container">$Q(m_0)$</span> (<em>base case</em>) and <span class="math-container">$(Q(m)\Rightarrow Q(m++))$</span>. The left hand side of (\ref{1}) is really beneficial since that helps in proving <span class="math-container">$(Q(m)\Rightarrow Q(m++))$</span>.</p>
<p>We can show that <span class="math-container">$(Q(m)\Rightarrow P(m))\Rightarrow Q(m++)$</span>, our induction hypothesis of proving <span class="math-container">$\forall m\leq m_0 Q(n)$</span> would be something along the lines of</p>
<blockquote>
<p>Suppose inductively that <span class="math-container">$Q(m)$</span> is true for some arbitrary natural number <span class="math-container">$m\geq m_0$</span>, we aim to show that <span class="math-container">$Q(m++)$</span> is true.</p>
</blockquote>
<p>Here our goal becomes to show that <span class="math-container">$Q(m++)$</span> or rather <span class="math-container">$(\forall m_0\leq m' < m++\; P(m'))$</span> is true, we will do this by showing that <span class="math-container">$(Q(m)\Rightarrow P(m))\Rightarrow Q(m++)$</span>. Essentially the inductive hypothesis gives us by definition of <span class="math-container">$Q(m)$</span> that <span class="math-container">$(\forall m_0\leq m' < m\; P(m'))$</span> which is just one lower than what we are trying to prove.</p>
<p>Had we tried to prove <span class="math-container">$\forall m\geq m_0\; P(m)$</span> instead as the right hand side of (\ref{1}), then our induction hypothesis would be along the lines of</p>
<blockquote>
<p>Suppose inductively that <span class="math-container">$P(m)$</span> is true for a arbitrary natural number <span class="math-container">$m\geq m_0$</span>, we aim to show <span class="math-container">$P(m++)$</span> is true.</p>
</blockquote>
<p>This inductive hypothesis gives us that <span class="math-container">$P('m)$</span> is true but we can't really use the original hypothesis <span class="math-container">$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right)$</span> much since that would require us to show that <span class="math-container">$Q(m++)$</span> to infer (by modus ponens) that <span class="math-container">$P(m++)$</span> is true. For that we would need to know that <span class="math-container">$P(m')$</span> holds true for each natural number <span class="math-container">$m'$</span> greater than or equal to <span class="math-container">$m_0$</span> but less than <span class="math-container">$m++$</span>. Our inductive hypothesis simply doesn't give as much information as the one obtained from <span class="math-container">$\forall m\geq m_0 Q(m)$</span>.</p>
<h1>Tl;dr: the proof</h1>
<p>I do apologise for the length, I write in this way since many reading this book might be having their first exposure to some proofing techniques thus I feel it justified.</p>
<h3>Some what of a lemma,</h3>
<p>Now we aim to prove (\ref{2}), <span class="math-container">$(\Rightarrow)$</span> Suppose that <span class="math-container">$\forall m\geq m_0\;P(m)$</span>. Let <span class="math-container">$n$</span> be a arbitrary natural number such that <span class="math-container">$n\geq m_0$</span>, we aim to prove <span class="math-container">$Q(n)$</span> to be true which by definition amount to proving that <span class="math-container">$\forall m_0\leq m' < n\; P(m')$</span> is true, which follows trivially since by assumption <span class="math-container">$P(k)$</span> is true for all natural numbers <span class="math-container">$k\geq m_0$</span>, thus <span class="math-container">$Q(n)$</span> is true.</p>
<p>Suppose that <span class="math-container">$\forall n\geq m_0\; Q(n)$</span>, and let <span class="math-container">$m$</span> be a arbitrary natural number such that <span class="math-container">$m\geq m_0$</span>. We aim to show that <span class="math-container">$P(m)$</span> is true, since <span class="math-container">$m\geq m_0$</span> and <span class="math-container">$m++ > m$</span> (this can be proven) then by transitivity <span class="math-container">$m++ > m_0$</span> thus by our hypothesis <span class="math-container">$Q(m++)$</span> and by definition of <span class="math-container">$Q(m++)$</span> we can infer that <span class="math-container">$P(m)$</span> is true.
<span class="math-container">$$\tag*{$\blacksquare$}$$</span></p>
<h3>The main proof.</h3>
<p>We we aim to prove <em>proposition 2.2.14</em>, we do so by proving (\ref{1}) using many of the facts we have already laid out.</p>
<p>Suppose <span class="math-container">$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right)$</span>,
that is for each <span class="math-container">$m\geq m_0$</span> that if <span class="math-container">$Q(m)$</span> is true then <span class="math-container">$P(m)$</span> is true. Now we employ mathematical induction in our goal of proving that <span class="math-container">$\forall n\geq m_0\; Q(n)$</span>, for the base case let <span class="math-container">$n=m_0$</span> we then aim to prove that <span class="math-container">$Q(m_0)$</span> is true. By definition we aim to show <span class="math-container">$\forall m_0\geq m' < m_0\; P(m')$</span>, by <em>proposition. 2.2.13</em> (Trichotomy of order for natural numbers) there can be no natural number such that either <span class="math-container">$m_0 < m'$</span> or <span class="math-container">$m_0 =m$</span> and at the same time <span class="math-container">$m'<m_0$</span> this this is vacuously true, thus <span class="math-container">$Q(m_0)$</span> is vacuously true.</p>
<p>Now suppose inductively that <span class="math-container">$Q(m)$</span> is true for a arbitrary natural number <span class="math-container">$m\geq m_0$</span>, we aim to show that <span class="math-container">$Q(m++)$</span> is true. Since <span class="math-container">$Q(m)$</span> holds then <span class="math-container">$P(m')$</span> for each natural number <span class="math-container">$m'$</span> less than or equal to <span class="math-container">$m_0$</span> and less than <span class="math-container">$m$</span>, and by our assumption <span class="math-container">$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right)$</span> we can infer that <span class="math-container">$P(m)$</span> is true. Thus <span class="math-container">$P(m')$</span> hold for each natural number <span class="math-container">$m'$</span> such that <span class="math-container">$m_0\geq m' \leq m$</span> or rather <span class="math-container">$m_0\geq m' < m++$</span> which is the definition of <span class="math-container">$Q(m++)$</span>, therefore we have closed the induction.</p>
<p>Since we have proven <span class="math-container">$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right) \Rightarrow \forall n\geq m_0\; Q(n)$</span> and <span class="math-container">$\forall m\geq m_0 P(m) \iff \forall n\geq m_0 Q(n)$</span>, we have equivalently proven (\ref{*})
<span class="math-container">$$\tag*{$\blacksquare$}$$</span></p>
<h2>Conclusion.</h2>
<p>In essence this what a very long winded explanation for how I think the hint was supposed to be used (and why), along with the justification of (\ref{2}) that I felt was missing this was motivated by the aforementioned exercise in <em>"How to prove it: A structured approach."</em> (perhaps it is obvious but I wanted to keep the tone of the book).This is very much open to critique since I am very much a self taught novice and appreciate all feedback.</p>
Answer 8: <p>I think that a rewriting of the proposition will help in resolving the problem. What Tao actually meant was: </p>
<p><strong>Proposition $2.2.14$</strong> (Strong principle of induction). Let $m_0$ be
a natural number, and let $P(m)$ be a property pertaining to an
arbitrary natural number $m$ <strong>such that</strong> for each $m \geq m_0$, we
have the following implication: if $P(m')$ is true for all natural
numbers $m_0 \leq m' < m$, then $P(m)$ is also true. (In particular,
this means that $P(m_0)$ is true since in this case, the hypothesis is vacuous
.) Then we can conclude that $P(m)$ is true for all natural
numbers $m\geq m_0.$</p>
<p>Thus the fact that $P(n)$ holds is automatically implied by the definition of $P(n)$. What that means is that whenever we say $P(n)$ holds for some $n\geq m_0$ then it is equivalent to saying that $P(m)$ holds for all natural numbers $m_0\leq m<n$ and $P(n)$. </p>
<p>This is what I think, but maybe I am wrong. </p>
Answer 9: <p>You've probably figured this out by now but here's my take:</p>
<p>We wanna show <span class="math-container">$Q(n)$</span> is true for <span class="math-container">$n\ge m_0$</span> given <span class="math-container">$Q(n)\Rightarrow P(n)$</span> by induction on <span class="math-container">$n$</span>.</p>
<p>Let <span class="math-container">$n\le m_0$</span>. Then <span class="math-container">$Q(n)$</span> is vacuously true. These are a bunch of base cases. Now suppose that <span class="math-container">$Q(n)$</span> is true for some <span class="math-container">$n\ge m_0$</span>. Well <span class="math-container">$Q(n)\Rightarrow P(n)$</span> so <span class="math-container">$P(n)$</span> is true. This means that <span class="math-container">$P(m')$</span> is true for <span class="math-container">$m_0\le m'<n+1$</span> which is another way of saying <span class="math-container">$Q(n+1)$</span>. So the induction is closed, and <span class="math-container">$Q(n)$</span> for every <span class="math-container">$n\ge m_0$</span>.</p>
<p>I was confused similarly. The trick is that we need to suppose <span class="math-container">$Q(n)\Rightarrow P(n)$</span> because that's what strong induction is all about, and the weak induction to show <span class="math-container">$Q(n)$</span> uses that fact but is kinda orthogonal to it.</p>
Answer 10: <p>$Q(m_0)$ is vacuously true.</p>
<p>Suppose Q(n) is true. This means that $P(n)$ is true for all $m_0\le m<n$</p>
<p>From the implication in the hypothesis, which can be rewritten (with a slight change of notation) as:</p>
<p>"if $P(m)$ is true for all natural
numbers $m_0 \leq m < n$, then $P(n)$ is also true."</p>
<p>it follows that $P(n)$ is true. As $P(m)$ is true for $m<n$, one has:</p>
<p>$P(n)$ is true for all $m_0\le m<n+1$</p>
<p>So $Q(n+1)$ is true.</p>
Answer 11: <h2>Slightly long approach.</h2>
<p>I spent some time trying to find an answer to this same question, while there are good answers out there. I still had some questions that I wanted further justification for and I write this answer for others that might stumble on the same search as I did.</p>
<p>I have seen a fair bit of some common errors in many answers, one being that <span class="math-container">$m=0$</span> has been used in the base case of induction proofs and not <span class="math-container">$m=m_0$</span>.</p>
<h3>Motivation for this late post.</h3>
<p>Most of the answers I have seen used Terence's hint of defining <span class="math-container">$Q(n)$</span> to be the property that <span class="math-container">$P(m)$</span> is true for all <span class="math-container">$m_0\leq m < n$</span>. For clarity transcribing <em>proposition 2.2.14</em> in logic symbol it takes the form</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(\forall m_0\leq m' < m\; P(m')) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; P(m)\tag{*}\label{*}$$</span></p>
<p>Many for the answers I have seen the proofs end up taking the form</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(\forall m_0\leq m' < m\; P(m')) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; Q(m)$$</span></p>
<p>Which can be rewritten in terms of our definition of <span class="math-container">$Q(n)$</span> as</p>
<p><span class="math-container">$$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right) \Rightarrow \forall m\geq m_0\; Q(m)\label{1}\tag{1}$$</span></p>
<p>And I think this is exactly what the hint is implying (or so I conjecture). But I think there is a crucial justification missing to keep the tone of the book up until this point. In Daniel J. Velleman's book <em>"How to prove it: A structured approach."</em> (3rd edition ), in chapter 6.4 exercise (1), he ask the reader to justify strong induction and in the first part of the exercise he ask the reader to show the following. (little changed to fit Terence's definition.)</p>
<p><span class="math-container">$$\forall m\geq m_0 P(m) \iff \forall n\geq m_0 Q(n)\tag{2}\label{2}$$</span></p>
<p>So if we approach the proof by (\ref{1}) after proving (\ref{2}), it would the the same as proving (\ref{*}). But why would we rather want to use the right hand side of (\ref{2}) rather than the left ? I will elaborate on that in the next section.</p>
<h3>Difference in induction hypotheses.</h3>
<p>For a implication such as (\ref{1}) for example, we start out with supposing the left hand of the implication (<em>the antecedent</em>) to be true and then use the most suitable proofing method to prove the right hand side of the implication (<em>the consequence</em>).</p>
<p>In (\ref{1}) the most suitable approach for the right hand side is proof by mathematical induction, proving <span class="math-container">$\forall m\leq m_0 Q(n)$</span> by standard induction amounts to proving <span class="math-container">$Q(m_0)$</span> (<em>base case</em>) and <span class="math-container">$(Q(m)\Rightarrow Q(m++))$</span>. The left hand side of (\ref{1}) is really beneficial since that helps in proving <span class="math-container">$(Q(m)\Rightarrow Q(m++))$</span>.</p>
<p>We can show that <span class="math-container">$(Q(m)\Rightarrow P(m))\Rightarrow Q(m++)$</span>, our induction hypothesis of proving <span class="math-container">$\forall m\leq m_0 Q(n)$</span> would be something along the lines of</p>
<blockquote>
<p>Suppose inductively that <span class="math-container">$Q(m)$</span> is true for some arbitrary natural number <span class="math-container">$m\geq m_0$</span>, we aim to show that <span class="math-container">$Q(m++)$</span> is true.</p>
</blockquote>
<p>Here our goal becomes to show that <span class="math-container">$Q(m++)$</span> or rather <span class="math-container">$(\forall m_0\leq m' < m++\; P(m'))$</span> is true, we will do this by showing that <span class="math-container">$(Q(m)\Rightarrow P(m))\Rightarrow Q(m++)$</span>. Essentially the inductive hypothesis gives us by definition of <span class="math-container">$Q(m)$</span> that <span class="math-container">$(\forall m_0\leq m' < m\; P(m'))$</span> which is just one lower than what we are trying to prove.</p>
<p>Had we tried to prove <span class="math-container">$\forall m\geq m_0\; P(m)$</span> instead as the right hand side of (\ref{1}), then our induction hypothesis would be along the lines of</p>
<blockquote>
<p>Suppose inductively that <span class="math-container">$P(m)$</span> is true for a arbitrary natural number <span class="math-container">$m\geq m_0$</span>, we aim to show <span class="math-container">$P(m++)$</span> is true.</p>
</blockquote>
<p>This inductive hypothesis gives us that <span class="math-container">$P('m)$</span> is true but we can't really use the original hypothesis <span class="math-container">$\left(\forall m\geq m_0 \left[(Q(m) \Rightarrow P(m)\right]\right)$</span> much since that would require us to show that <span class="math-container">$Q(m++)$</span> to infer (by modus ponens) that <span class="math-container">$P(m++)$</span> is true. For that we would need to know that <span class="math-container">$P(m')$</span> holds true for each natural number <span class="math-container">$m'$</span> greater than or equal to <span class="math-container">$m_0$</span> but less than <span class="math-container">$m++$</span>. Our inductive hypothesis simply doesn't give as much information as the one obtained from <span class="math-container">$\forall m\geq m_0 Q(m)$</span>.</p>
<h1>Tl;dr: the proof</h1>
<p>I do apologise for the length; I write in this way since many readers of this book might be having their first exposure to some proof techniques, thus I feel it is justified.</p>
<h3>Somewhat of a lemma.</h3>
<p>Now we aim to prove (\ref{2}). <span class="math-container">$(\Rightarrow)$</span> Suppose that <span class="math-container">$\forall m\geq m_0\;P(m)$</span>. Let <span class="math-container">$n$</span> be an arbitrary natural number such that <span class="math-container">$n\geq m_0$</span>; we aim to prove <span class="math-container">$Q(n)$</span> to be true, which by definition amounts to proving that <span class="math-container">$\forall m_0\leq m' < n\; P(m')$</span> is true. This follows trivially since by assumption <span class="math-container">$P(k)$</span> is true for all natural numbers <span class="math-container">$k\geq m_0$</span>, thus <span class="math-container">$Q(n)$</span> is true.</p>
<p><span class="math-container">$(\Leftarrow)$</span> Suppose that <span class="math-container">$\forall n\geq m_0\; Q(n)$</span>, and let <span class="math-container">$m$</span> be an arbitrary natural number such that <span class="math-container">$m\geq m_0$</span>. We aim to show that <span class="math-container">$P(m)$</span> is true. Since <span class="math-container">$m\geq m_0$</span> and <span class="math-container">$m++ > m$</span> (this can be proven), by transitivity <span class="math-container">$m++ > m_0$</span>, thus by our hypothesis <span class="math-container">$Q(m++)$</span> holds, and since <span class="math-container">$m_0\leq m < m++$</span>, by definition of <span class="math-container">$Q(m++)$</span> we can infer that <span class="math-container">$P(m)$</span> is true.
<span class="math-container">$$\tag*{$\blacksquare$}$$</span></p>
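<p>As a sanity check (and not a substitute for the proof), the equivalence (\ref{2}) can be verified exhaustively on a truncated window of natural numbers, treating <span class="math-container">$P$</span> as an arbitrary truth assignment. Below is a minimal Python sketch; the window bounds <code>m0</code> and <code>N</code> are arbitrary choices of mine.</p>
<pre><code>from itertools import product

m0, N = 2, 8  # check on the finite window [m0, N); both bounds are arbitrary

def Q(P, n):
    # Q(n): P(m') holds for every m0 <= m' < n (vacuously true for n <= m0)
    return all(P[m] for m in range(m0, min(n, N)))

for bits in product([False, True], repeat=N - m0):
    P = dict(zip(range(m0, N), bits))
    lhs = all(P[m] for m in range(m0, N))         # "forall m >= m0: P(m)", truncated
    rhs = all(Q(P, n) for n in range(m0, N + 1))  # "forall n >= m0: Q(n)", truncated
    assert lhs == rhs                             # both sides agree on every assignment

print("equivalence (2) holds on the finite window")
</code></pre>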
<h3>The main proof.</h3>
<p>We now aim to prove <em>proposition 2.2.14</em>; we do so by proving (\ref{1}), using many of the facts we have already laid out.</p>
<p>Suppose <span class="math-container">$\left(\forall m\geq m_0 \left[Q(m) \Rightarrow P(m)\right]\right)$</span>,
that is, for each <span class="math-container">$m\geq m_0$</span>, if <span class="math-container">$Q(m)$</span> is true then <span class="math-container">$P(m)$</span> is true. Now we employ mathematical induction in our goal of proving that <span class="math-container">$\forall n\geq m_0\; Q(n)$</span>. For the base case let <span class="math-container">$n=m_0$</span>; we then aim to prove that <span class="math-container">$Q(m_0)$</span> is true. By definition we aim to show <span class="math-container">$\forall m_0\leq m' < m_0\; P(m')$</span>; by <em>proposition 2.2.13</em> (Trichotomy of order for natural numbers) there can be no natural number <span class="math-container">$m'$</span> with <span class="math-container">$m_0\leq m'$</span> and at the same time <span class="math-container">$m'<m_0$</span>, thus <span class="math-container">$Q(m_0)$</span> is vacuously true.</p>
<p>Now suppose inductively that <span class="math-container">$Q(m)$</span> is true for an arbitrary natural number <span class="math-container">$m\geq m_0$</span>; we aim to show that <span class="math-container">$Q(m++)$</span> is true. Since <span class="math-container">$Q(m)$</span> holds, <span class="math-container">$P(m')$</span> holds for each natural number <span class="math-container">$m'$</span> greater than or equal to <span class="math-container">$m_0$</span> and less than <span class="math-container">$m$</span>, and by our assumption <span class="math-container">$\left(\forall m\geq m_0 \left[Q(m) \Rightarrow P(m)\right]\right)$</span> we can infer that <span class="math-container">$P(m)$</span> is true. Thus <span class="math-container">$P(m')$</span> holds for each natural number <span class="math-container">$m'$</span> such that <span class="math-container">$m_0\leq m' \leq m$</span>, or rather <span class="math-container">$m_0\leq m' < m++$</span>, which is the definition of <span class="math-container">$Q(m++)$</span>; therefore we have closed the induction.</p>
<p>Since we have proven <span class="math-container">$\left(\forall m\geq m_0 \left[Q(m) \Rightarrow P(m)\right]\right) \Rightarrow \forall n\geq m_0\; Q(n)$</span> and <span class="math-container">$\forall m\geq m_0 P(m) \iff \forall n\geq m_0 Q(n)$</span>, we have equivalently proven (\ref{*}).
<span class="math-container">$$\tag*{$\blacksquare$}$$</span></p>
<h2>Conclusion.</h2>
<p>In essence this was a very long-winded explanation of how I think the hint was supposed to be used (and why), along with the justification of (\ref{2}) that I felt was missing; the latter was motivated by the aforementioned exercise in <em>"How to prove it: A structured approach."</em> (perhaps it is obvious, but I wanted to keep the tone of the book). This is very much open to critique since I am very much a self-taught novice and appreciate all feedback.</p>
Answer 12: <p>I think that a rewriting of the proposition will help in resolving the problem. What Tao actually meant was: </p>
<p><strong>Proposition $2.2.14$</strong> (Strong principle of induction). Let $m_0$ be
a natural number, and let $P(m)$ be a property pertaining to an
arbitrary natural number $m$ <strong>such that</strong> for each $m \geq m_0$, we
have the following implication: if $P(m')$ is true for all natural
numbers $m_0 \leq m' < m$, then $P(m)$ is also true. (In particular,
this means that $P(m_0)$ is true since in this case the hypothesis is vacuous.)
Then we can conclude that $P(m)$ is true for all natural
numbers $m\geq m_0.$</p>
<p>Thus the fact that $P(n)$ holds is automatically implied by the definition of $P(n)$. What that means is that whenever we say $P(n)$ holds for some $n\geq m_0$ then it is equivalent to saying that $P(m)$ holds for all natural numbers $m_0\leq m<n$ and $P(n)$. </p>
<p>This is what I think, but maybe I am wrong. </p>
Answer 13: <p>You've probably figured this out by now but here's my take:</p>
<p>We want to show <span class="math-container">$Q(n)$</span> is true for <span class="math-container">$n\ge m_0$</span>, given <span class="math-container">$Q(n)\Rightarrow P(n)$</span>, by induction on <span class="math-container">$n$</span>.</p>
<p>Let <span class="math-container">$n\le m_0$</span>. Then <span class="math-container">$Q(n)$</span> is vacuously true. These are a bunch of base cases. Now suppose that <span class="math-container">$Q(n)$</span> is true for some <span class="math-container">$n\ge m_0$</span>. Well, <span class="math-container">$Q(n)\Rightarrow P(n)$</span>, so <span class="math-container">$P(n)$</span> is true. This means that <span class="math-container">$P(m')$</span> is true for <span class="math-container">$m_0\le m'<n+1$</span>, which is another way of saying <span class="math-container">$Q(n+1)$</span>. So the induction is closed, and <span class="math-container">$Q(n)$</span> holds for every <span class="math-container">$n\ge m_0$</span>.</p>
<p>I was confused similarly. The trick is that we need to suppose <span class="math-container">$Q(n)\Rightarrow P(n)$</span> because that's what strong induction is all about, and the weak induction to show <span class="math-container">$Q(n)$</span> uses that fact but is kinda orthogonal to it.</p>
Answer 14: <p>$Q(m_0)$ is vacuously true.</p>
<p>Suppose $Q(n)$ is true. This means that $P(m)$ is true for all $m_0\le m<n$.</p>
<p>From the implication in the hypothesis, which can be rewritten (with a slight change of notation) as:</p>
<p>"if $P(m)$ is true for all natural
numbers $m_0 \leq m < n$, then $P(n)$ is also true."</p>
<p>it follows that $P(n)$ is true. As $P(m)$ is true for $m<n$, one has:</p>
<p>$P(m)$ is true for all $m_0\le m<n+1$.</p>
<p>So $Q(n+1)$ is true.</p>
|
Fibonacci Game Invariant Sum | 2,104,154 | 5 | https://math.stackexchange.com/questions/2104154/fibonacci-game-invariant-sum | <p><strong>The Problem</strong></p>
<p>Suppose I start with two natural numbers, $F_1=x \pmod 7$ and $F_2=y \pmod 7$ so that not both $x$ and $y$ are $0$ (and of course, $x,y<7$). From there, I form the sequence $F_{n+1}=F_n+F_{n-1}\pmod 7$. </p>
<p>The interesting fact I noticed is that $\sum_{n=1}^{16} F_n=49$ always!</p>
<p>For example, if I started with, say $x=1$ and $y=4$, the first $16$ terms of my sequence would be $1,4,5,2,0,2,2,4,6,3,2,5,0,5,5,3$, and indeed, the sum is $49$.</p>
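<p>Here is a minimal Python sketch (the helper <code>fib_mod7</code> is just a name I chose) that brute-force confirms this over all $48$ admissible starting pairs:</p>
<pre><code>def fib_mod7(x, y, k=16):
    # First k terms of the sequence F_1 = x, F_2 = y, F_{n+1} = F_n + F_{n-1} (mod 7).
    seq = [x % 7, y % 7]
    while len(seq) < k:
        seq.append((seq[-1] + seq[-2]) % 7)
    return seq

# Every starting pair other than (0, 0) gives the same 16-term sum.
assert all(sum(fib_mod7(x, y)) == 49
           for x in range(7) for y in range(7) if (x, y) != (0, 0))
print("sum of the first 16 terms is 49 for all 48 starting pairs")
</code></pre>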
<p>Why is this? </p>
<p><strong>What I've Tried</strong></p>
<p>We note that if we expand the first $16$ terms generally, as functions of $x$ and $y$, that $F_a+F_{8+a}\equiv 0 \pmod 7$ for all $1\le a \le 8$ regardless of what $x$ and $y$ are. Since $F_a+F_{8+a}$ can only take on values of $0$ and $7$, it is enough to show that of the first $8$ numbers $F_1$ through $F_8$, exactly one has to be $0$ (the rest being nonzero would show that the total sum is always $7\cdot 7+0\cdot 1=49$). Of course, we could brute force, but is there a nice way to see what's going on?</p>
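<p>Both reductions can likewise be confirmed by exhaustion; a minimal sketch along the same lines (again with a helper name of my own):</p>
<pre><code>def check(x, y):
    s = [x % 7, y % 7]
    for _ in range(14):
        s.append((s[-1] + s[-2]) % 7)
    antipodal = all((s[a] + s[a + 8]) % 7 == 0 for a in range(8))  # F_a + F_{8+a} = 0 (mod 7)
    one_zero = s[:8].count(0) == 1  # exactly one of F_1, ..., F_8 is 0
    return antipodal and one_zero

assert all(check(x, y) for x in range(7) for y in range(7) if (x, y) != (0, 0))
</code></pre>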
| [
{
"answer_id": 2104201,
"score": 1,
"text": "<p>Because there are only a limited number of possible pair values $\\bmod 7$, and because given a certain pair you can work both forwards and backwards uniquely under Fibonacci rules, it is inevitable that you have cycles of values that return to their starting pair. I think there might be an element of small-number effect in the fact that the $48$ possible value pairs (since we exclude $0,0$) fall neatly into just $3$ cycles of length $16$, all including zeroes:\n$$\\begin{array}{c}\n0 & 1 & 1 & 2 & 3 & 5 & 1 & 6 & 0 & 6 & 6 & 5 & 4 & 2 & 6 & 1 & (0) \\\\\n0 & 2 & 2 & 4 & 6 & 3 & 2 & 5 & 0 & 5 & 5 & 3 & 1 & 4 & 5 & 2 & (0) \\\\\n0 & 3 & 3 & 6 & 2 & 1 & 3 & 4 & 0 & 4 & 4 & 1 & 5 & 6 & 4 & 3 & (0) \\\\\n\\end{array}$$</p>\n\n<p>So every starting pair can be found in these cycles, which exhibit your symmetric effect of having $a_i\\equiv -a_{i+8}$ (which also perpetuates through the Fibonacci recurrence).</p>\n\n<p>So I don't know if that counts as a \\"nice way\\" - I certainly think there is more structure to be understood - but this is an interesting area that I was also looking at across bases (although I hadn't yet looked at variant starting values).</p>\n"
},
{
"answer_id": 2106278,
"score": 1,
"text": "<p>Funnily enough this happens for all $p$ in the sequence <a href=\"https://oeis.org/A000057\" rel=\"nofollow noreferrer\">A000057</a>, this\nsequence consists of primes $p$ for which the Fibonacci entry point is\n$p+1$ (i.e. the first positive $n$ for which $p$ divides $F(n)$ is $p+1$,\nwhere $(F(n))_{n \\geq 0}$ is the Fibonacci sequence). Note that for all\n$p$ in this list the polynomial $x^2 - x - 1$ is irreducible in $\\mathbb{F}\n_p[x]$ as is explained in section 4 of <a href=\"http://math.stanford.edu/~brianrl/notes/fibonacci.pdf\" rel=\"nofollow noreferrer\">these</a> notes.</p>\n\n<p><strong>The Conjecture</strong></p>\n\n<p>Take a prime number $p$ in A000057.\nThe first few of these primes are $$2,\\ 3,\\ 7,\\ 23,\\ 43,\\ 67,\\ 83,\\ 103,\\ 127,\n\\ 163,\\dots.$$ Now let $F_{a,b}(n)$ be a sequence in $\\mathbb{F}_p$ defined \nby $$F_{a,b}(0) = a,\\ F_{a,b}(1) = b \\text{ and } F_{a,b}(n) + F_{a,b}(n+1) = \nF_{a,b}(n+2).$$ We let $r_{a,b}(n)$ be the unique element of $\\{0, \\dots, p-1\\}$\nsuch that $r_{a,b}(n) \\equiv F_{a,b}(n) \\pmod{p}$. If we, for example, write down the\nfirst couple of terms of the sequence $r_{0,1}(n)$ for $p = 7$ we \nget $$ 0,\\ 1,\\ 1,\\ 2,\\ 3,\\ 5,\\ 1,\\ 6,\\ 0,\\ 6,\\ 6,\\ 5,\\ 4,\\ 2,\\ 6,\\ 1,\\dots $$ if we calculate\nthe sum of these $16$ values we get $49$. We can try this a couple of more times\nfor different values of $p$ and come to the conjecture that if $a,b$ are not both zero\nwe find that $$\\sum_{n=0}^{2p+1} r_{a,b}(n) = p^2.$$</p>\n\n<p><strong>The Proof</strong></p>\n\n<p>First we observe that $F_{a,b}(n) + F_{a,b}(n + p + 1)$ seems to be zero for all\nstarting values $a,b$ and all $n$. To prove this we use the theory of characteristic \npolynomials for recurrence relations to obtain a general formula for the value of \n$F_{a,b}(n)$. The characteristic polynomial for this relation is given by $f(x)\n= x^2 - x - 1$. Since we have chosen $p$ such that $f(x)$ is irreducible in\n$\\mathbb{F}_p[x]$ we have to go to the extension field $$\\mathbb{F}_{p^2}:=\n\\mathbb{F}_p[x]/(f(x))$$ to obtain the roots $z_1,z_2$ such that $f(x) = (x-z_1)\n(x-z_2)$. By comparing coefficients we see that $z_1 \\cdot z_2 = -1$ and from the \ntheory of finite fields we know that the Frobenius map $x \\mapsto x^p$ sends\n$z_1$ to $z_2$ and vice versa. Now the general term $F_{a,b}(n)$ is given\nby $$F_{a,b}(n) = c_1 \\cdot z_1^n + c_2 \\cdot z_2^n,$$ where $c_1,c_2$ are \nconstants in $\\mathbb{F}_{p^2}$ that depend on the starting values $a,b$. We \ncan now calculate that</p>\n\n<p>\\begin{align*}\n\tF_{a,b}(n+p+1) &= c_1\\cdot z_1^{n+p+1} + c_2 \\cdot {z_2}^{n+p+1} \\\\\n\t&= c_1 \\cdot z_1^n \\cdot z_1 \\cdot z_1^p + c_2 \\cdot {z_2}^n \\cdot {z_2} \\cdot {z_2}^p \\\\\n\t&= c_1 \\cdot z_1^n \\cdot (z_1 \\cdot {z_2}) + c_2 \\cdot {z_2}^n \\cdot ({z_2} \\cdot z_1) \\\\\n\t&= - c_1 \\cdot z_1^n - c_2 \\cdot {{z_2}}^{n}\\\\\n\t&= -F_{a,b}(n).\n\\end{align*}</p>\n\n<p>This tells us that indeed $F_{a,b}(n) + F_{a,b}(n + p + 1) = 0$. Let us now consider\nthe sequences of the form $F_{0,a}(n)$, such that the first value is zero. We see that\n$c_1 = -c_2$ such that the general formula is of the form $$F_{0,a}(n) = C(z_1^n -\nz_2^n),$$ where $C \\in \\mathbb{F}_{p^2}$ depends on $a$. We want to know when\n$F_{0,a}(n) = 0$ for the first time with $n > 0$ for non-zero $a$. Looking at the direct \nformula we see that $F_{0,a}(n) = 0$ if and only if $z_1^n = z_2^n$. From Galois theory\nwe know that this happens if and only if $z_1^n \\in \\mathbb{F}_{p}$. 
Now using the fact\nthat $z_1^2 = z_1 +1$ we can prove by induction that $$z_1^n = \\overline{F\n(n)} \\cdot z_1 + \\overline{F(n-1)},$$ where $(F(n))_{n \\geq 0}$ is now the regular \nFibonacci sequence. So we see that this is an element of $\\mathbb{F}_{p}$ if and only if\n$F(n)$ is divisible by $p$. So we are looking for the first positive $n$ such that \n$F(n)$ is divisible by $p$. We have chosen $p$ precisely such that this value is $p+1$. \nThis means that the period of $F_{0,a}$ is at least $p+1$, but since $F_{0,a}(1) = a$\nand $F_{0,a}(p+2) = -a$ and these are not equal, we see that the period of $F_{0,a}$\nis $2(p+1)$. So we see that the pairs \n$$\n(F_{0,a}(0),F_{0,a}(1)),\\ (F_{0,a}(1),F_{0,a}(2)),\\ \\dots, \\ (F_{0,a}(2p+1),F_{0,a}(2p+2)) \n$$\nare all different. We also see that the sequences $$F_{0,1},\\ F_{0,2}, \\dots, \nF_{0,\\frac{p-1}{2}}$$ are all different since a sequence of the form $F_{0,a}$ contains\nonly the two tuples $(0,a)$ and $(0,-a)$ with $0$ as the first entry of the pair. So if we\nnow count pairs we have found $$\\frac{p-1}{2}\\cdot 2(p+1) = p^2-1,$$ together with the \ntrivial sequence $F_{0,0}$ we see that we have counted all $p^2$ pairs and thus all\nsequences $F_{a,b}$ are actually of the form $F_{0,c}$ for some $c$ possibly shifted. \nSince the period of $F_{a,b}$ is $2(p+1)$ the sum is constant under the shifting of the \nsequence such that $$\\sum_{n=0}^{2p+1} r_{a,b}(n) = \\sum_{n=0}^{2p+1} r_{0,c}(n),$$\nfor some $c$. We have seen that if $c \\neq 0$ indeed $0$ only appears two times in the \nsum. Since $F_{a,b}(n+p+1)=-F_{a,b}(n)$ we see that $r_{a,b}(n+p+1)+ r_{a,b}(n) = p$\nif $F_{a,b}(n) \\neq 0$ and thus we see that the sum is equal to \n$$\\sum_{n=0}^{2p+1} r_{0,c}(n) = p^2.$$</p>\n"
},
{
"answer_id": 2104410,
"score": 0,
"text": "<p>One way to prove your simplification (nice job btw!) [Not sure if this is nice enough, but it should be a little more satisfying than brute force]: Consider the group $G:=\\langle A^2\\rangle\\subset GL(2,\\mathbb{F}_7)$ where $A=\\begin{pmatrix}0&1\\\\1&1\\end{pmatrix}$. Note that $A^2 \\begin{pmatrix}F_n\\\\F_{n+1}\\end{pmatrix} = \\begin{pmatrix} F_{n+2}\\\\ F_{n+3}\\end{pmatrix}$ (mod $7$) for all $n\\in \\mathbb{N}$. Computing powers of $A$ mod $7$: $A^2= \\begin{pmatrix}1&1\\\\1&2\\end{pmatrix}$, $A^4 = \\begin{pmatrix} 2&3\\\\3&5\\end{pmatrix}$, $A^8 = -I$. Thus $A$ has order $16$ and $|G|=8$.</p>\n\n<p>Let $G$ operate on $\\mathbb{F}_7^2$ by multiplication mod $7$. It is easy to see that every element $(0,0)\\neq (a,b)\\in\\mathbb{F}_7^2$ is only fixed by the identity element of $G$. So its orbit has $8$ elements, and since $|\\mathbb{F}_7^2| = 49 = 8\\cdot 6 + 1$, there are $6$ of these in total. Let's consider the case that some orbit contains an element of type $(0,a)$ or $(a,0)$. The Fibonacci sequences starting at these points mod $7$ look like this:</p>\n\n<p>$0,a,a,2a,3a,5a,a,-a,0,-a,..$</p>\n\n<p>$a,0,a,a,2a,3a,5a,a,-a,0,..$</p>\n\n<p>where the dots indicate that the same sequence as before is repeated with just $a$ exchanged for $-a$. So the orbits (excluding the trivial one) of pairs in $\\mathbb{F}_7^2$, which contain a zero in one of their pairs, contain exactly 2 zeroes (in two of their pairs). Since there are (excluding the pair $(0,0)$) exactly $6\\cdot 2 = 12$ of these, each of the $6$ orbits contains exactly two of them, with the 'distance' (in the Fibonacci sequence) of one zero to the next being $8$ (following from the expansion of the sequence above). So any $8$ consecutive numbers in a Fibonacci sequence contain exactly one that is divisible by $7$.</p>\n"
},
},
{
"answer_id": 2104201,
"score": 1,
"text": "<p>Because there only a limited number of possible pair values $\\bmod 7$, and because given a certain pair you can work both forwards and backwards uniquely under Fibonacci rules, it is inevitable that you have cycles of values that return to their starting pair. I think there might be an element of small-number effect that the $48$ possible value pairs (since we exclude $0,0$) fall neatly into just $3$ cycles of length $16$, all including zeroes:\n$$\\begin{array}{c}\n0 & 1 & 1 & 2 & 3 & 5 & 1 & 6 & 0 & 6 & 6 & 5 & 4 & 2 & 6 & 1 & (0) \\\\\n0 & 2 & 2 & 4 & 6 & 3 & 2 & 5 & 0 & 5 & 5 & 3 & 1 & 4 & 5 & 2 & (0) \\\\\n0 & 3 & 3 & 6 & 2 & 1 & 3 & 4 & 0 & 4 & 4 & 1 & 5 & 6 & 4 & 3 & (0) \\\\\n\\end{array}$$</p>\n\n<p>So every starting pair can be found in these cycles, which exhibit your symmetric effect of having $a_i\\equiv -a_{i+8}$ (which also perpetuates through the Fibonacci recurrence).</p>\n\n<p>So I don't know if that counts as a \"nice way\" - I certainly think there is more structure to be understood - but this is an interesting area that I was also looking at across bases (although I hadn't yet looked at variant starting values).</p>\n"
}
] | Question: Fibonacci Game Invariant Sum
Answer 1: <p>Because there is only a limited number of possible pair values $\bmod 7$, and because given a certain pair you can work both forwards and backwards uniquely under Fibonacci rules, it is inevitable that you have cycles of values that return to their starting pair. I think there may be an element of small-number effect in the fact that the $48$ possible value pairs (since we exclude $0,0$) fall neatly into just $3$ cycles of length $16$, all including zeroes:
$$\begin{array}{c}
0 & 1 & 1 & 2 & 3 & 5 & 1 & 6 & 0 & 6 & 6 & 5 & 4 & 2 & 6 & 1 & (0) \\
0 & 2 & 2 & 4 & 6 & 3 & 2 & 5 & 0 & 5 & 5 & 3 & 1 & 4 & 5 & 2 & (0) \\
0 & 3 & 3 & 6 & 2 & 1 & 3 & 4 & 0 & 4 & 4 & 1 & 5 & 6 & 4 & 3 & (0) \\
\end{array}$$</p>
<p>So every starting pair can be found in these cycles, which exhibit your symmetric effect of having $a_i\equiv -a_{i+8}$ (which also perpetuates through the Fibonacci recurrence).</p>
<p>So I don't know if that counts as a "nice way" - I certainly think there is more structure to be understood - but this is an interesting area that I was also looking at across bases (although I hadn't yet looked at variant starting values).</p>
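<p>A minimal Python sketch (standard library only; the function and variable names are purely illustrative) that enumerates these cycles by iterating the step map $(x,y) \mapsto (y, x+y)$ over all nonzero pairs mod $7$:</p>
<pre><code>def fib_cycles(p=7):
    """Enumerate the cycles of (x, y) -> (y, x + y) on nonzero pairs mod p."""
    seen, cycles = set(), []
    for start in ((x, y) for x in range(p) for y in range(p) if (x, y) != (0, 0)):
        if start in seen:
            continue
        cycle, pair = [], start
        while pair not in seen:
            seen.add(pair)
            cycle.append(pair[0])                  # record the first entry of each pair
            pair = (pair[1], (pair[0] + pair[1]) % p)
        cycles.append(cycle)
    return cycles

for c in fib_cycles():
    print(len(c), c)  # three cycles of length 16, each containing two zeroes
</code></pre>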
Answer 2: <p>Funnily enough this happens for all $p$ in the sequence <a href="https://oeis.org/A000057" rel="nofollow noreferrer">A000057</a>; this
sequence consists of primes $p$ for which the Fibonacci entry point is
$p+1$ (i.e. the first positive $n$ for which $p$ divides $F(n)$ is $p+1$,
where $(F(n))_{n \geq 0}$ is the Fibonacci sequence). Note that for all
$p$ in this list the polynomial $x^2 - x - 1$ is irreducible in $\mathbb{F}
_p[x]$ as is explained in section 4 of <a href="http://math.stanford.edu/~brianrl/notes/fibonacci.pdf" rel="nofollow noreferrer">these</a> notes.</p>
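<p>For concreteness, here is a small Python sketch (plain standard library; trial division suffices at this scale) that computes Fibonacci entry points and recovers the start of A000057:</p>
<pre><code>def entry_point(p):
    """Smallest n > 0 such that p divides F(n)."""
    a, b, n = 0, 1, 0
    while True:
        a, b, n = b, (a + b) % p, n + 1
        if a == 0:
            return n

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

print([p for p in range(2, 164) if is_prime(p) and entry_point(p) == p + 1])
# [2, 3, 7, 23, 43, 67, 83, 103, 127, 163]
</code></pre>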
<p><strong>The Conjecture</strong></p>
<p>Take a prime number $p$ in A000057.
The first few of these primes are $$2,\ 3,\ 7,\ 23,\ 43,\ 67,\ 83,\ 103,\ 127,
\ 163,\dots.$$ Now let $F_{a,b}(n)$ be a sequence in $\mathbb{F}_p$ defined
by $$F_{a,b}(0) = a,\ F_{a,b}(1) = b \text{ and } F_{a,b}(n) + F_{a,b}(n+1) =
F_{a,b}(n+2).$$ We let $r_{a,b}(n)$ be the unique element of $\{0, \dots, p-1\}$
such that $r_{a,b}(n) \equiv F_{a,b}(n) \pmod{p}$. If we, for example, write down the
first couple of terms of the sequence $r_{0,1}(n)$ for $p = 7$ we
get $$ 0,\ 1,\ 1,\ 2,\ 3,\ 5,\ 1,\ 6,\ 0,\ 6,\ 6,\ 5,\ 4,\ 2,\ 6,\ 1,\dots $$ Calculating
the sum of these $16$ values gives $49$. We can try this a couple more times
for different values of $p$ and arrive at the conjecture that if $a,b$ are not both zero
we find that $$\sum_{n=0}^{2p+1} r_{a,b}(n) = p^2.$$</p>
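<p>Before turning to the proof, the conjecture is cheap to test numerically. A minimal Python sketch (standard library only; the helper name is illustrative) checks it for every nonzero starting pair and a few primes from A000057:</p>
<pre><code>def residue_sum(a, b, p):
    """Sum of r_{a,b}(n) for n = 0, ..., 2p + 1."""
    total = 0
    for _ in range(2 * p + 2):
        total += a                     # a is already reduced mod p
        a, b = b, (a + b) % p
    return total

for p in [2, 3, 7, 23, 43]:
    assert all(residue_sum(a, b, p) == p * p
               for a in range(p) for b in range(p) if (a, b) != (0, 0))
print("sum over one run of 2p + 2 terms is p^2 for all tested p")
</code></pre>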
<p><strong>The Proof</strong></p>
<p>First we observe that $F_{a,b}(n) + F_{a,b}(n + p + 1)$ seems to be zero for all
starting values $a,b$ and all $n$. To prove this we use the theory of characteristic
polynomials for recurrence relations to obtain a general formula for the value of
$F_{a,b}(n)$. The characteristic polynomial for this relation is given by $f(x)
= x^2 - x - 1$. Since we have chosen $p$ such that $f(x)$ is irreducible in
$\mathbb{F}_p[x]$ we have to go to the extension field $$\mathbb{F}_{p^2}:=
\mathbb{F}_p[x]/(f(x))$$ to obtain the roots $z_1,z_2$ such that $f(x) = (x-z_1)
(x-z_2)$. By comparing coefficients we see that $z_1 \cdot z_2 = -1$ and from the
theory of finite fields we know that the Frobenius map $x \mapsto x^p$ sends
$z_1$ to $z_2$ and vice versa. Now the general term $F_{a,b}(n)$ is given
by $$F_{a,b}(n) = c_1 \cdot z_1^n + c_2 \cdot z_2^n,$$ where $c_1,c_2$ are
constants in $\mathbb{F}_{p^2}$ that depend on the starting values $a,b$. We
can now calculate that</p>
<p>\begin{align*}
F_{a,b}(n+p+1) &= c_1\cdot z_1^{n+p+1} + c_2 \cdot {z_2}^{n+p+1} \\
&= c_1 \cdot z_1^n \cdot z_1 \cdot z_1^p + c_2 \cdot {z_2}^n \cdot {z_2} \cdot {z_2}^p \\
&= c_1 \cdot z_1^n \cdot (z_1 \cdot {z_2}) + c_2 \cdot {z_2}^n \cdot ({z_2} \cdot z_1) \\
&= - c_1 \cdot z_1^n - c_2 \cdot {{z_2}}^{n}\\
&= -F_{a,b}(n).
\end{align*}</p>
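<p>As a quick numerical sanity check of the identity just derived, a short Python sketch (the choice $p = 23$ and the starting pair $(5, 17)$ are arbitrary) confirming $F_{a,b}(n+p+1) \equiv -F_{a,b}(n) \pmod{p}$ on a sampled range:</p>
<pre><code># Generate 3p + 2 terms of F_{5,17} mod 23 and test the antiperiod identity.
p, x, y = 23, 5, 17
seq = []
for _ in range(3 * p + 2):
    seq.append(x)
    x, y = y, (x + y) % p
assert all((seq[n] + seq[n + p + 1]) % p == 0 for n in range(2 * p + 1))
print("F(n + p + 1) == -F(n) (mod p) holds on the sampled range")
</code></pre>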
<p>This tells us that indeed $F_{a,b}(n) + F_{a,b}(n + p + 1) = 0$. Let us now consider
the sequences of the form $F_{0,a}(n)$, i.e. those whose first value is zero. We see that
$c_1 = -c_2$, so that the general formula is of the form $$F_{0,a}(n) = C(z_1^n -
z_2^n),$$ where $C \in \mathbb{F}_{p^2}$ depends on $a$. We want to know when
$F_{0,a}(n) = 0$ for the first time with $n > 0$ for non-zero $a$. Looking at the direct
formula we see that $F_{0,a}(n) = 0$ if and only if $z_1^n = z_2^n$. From Galois theory
we know that this happens if and only if $z_1^n \in \mathbb{F}_{p}$. Now using the fact
that $z_1^2 = z_1 +1$ we can prove by induction that $$z_1^n = \overline{F
(n)} \cdot z_1 + \overline{F(n-1)},$$ where $(F(n))_{n \geq 0}$ is now the regular
Fibonacci sequence. So we see that this is an element of $\mathbb{F}_{p}$ if and only if
$F(n)$ is divisible by $p$. So we are looking for the first positive $n$ such that
$F(n)$ is divisible by $p$. We have chosen $p$ precisely such that this value is $p+1$.
This means that the period of $F_{0,a}$ is at least $p+1$, but since $F_{0,a}(1) = a$
and $F_{0,a}(p+2) = -a$, and these differ when $p$ is odd (the case $p = 2$ is easily checked by hand), we see that the period of $F_{0,a}$
is $2(p+1)$. So we see that the pairs
$$
(F_{0,a}(0),F_{0,a}(1)),\ (F_{0,a}(1),F_{0,a}(2)),\ \dots, \ (F_{0,a}(2p+1),F_{0,a}(2p+2))
$$
are all different. We also see that the sequences $$F_{0,1},\ F_{0,2}, \dots,
F_{0,\frac{p-1}{2}}$$ are all different since a sequence of the form $F_{0,a}$ contains
only the two tuples $(0,a)$ and $(0,-a)$ with $0$ as the first entry of the pair. So if we
now count pairs we have found $$\frac{p-1}{2}\cdot 2(p+1) = p^2-1;$$ together with the
trivial sequence $F_{0,0}$ this accounts for all $p^2$ pairs, and thus all
sequences $F_{a,b}$ are actually of the form $F_{0,c}$ for some $c$, possibly shifted.
Since the period of $F_{a,b}$ is $2(p+1)$, the sum is invariant under shifting the
sequence, so that $$\sum_{n=0}^{2p+1} r_{a,b}(n) = \sum_{n=0}^{2p+1} r_{0,c}(n),$$
for some $c$. We have seen that if $c \neq 0$, then $0$ appears only twice in the
sum. Since $F_{a,b}(n+p+1)=-F_{a,b}(n)$ we see that $r_{a,b}(n+p+1)+ r_{a,b}(n) = p$
if $F_{a,b}(n) \neq 0$ and thus we see that the sum is equal to
$$\sum_{n=0}^{2p+1} r_{0,c}(n) = p^2.$$</p>
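<p>The period claim above is also easy to test numerically. A Python sketch (standard library; $p = 2$ is excluded because there the period degenerates to $3$, since $-1 \equiv 1 \pmod 2$):</p>
<pre><code>def period(a, b, p):
    """Least n > 0 after which the pair (F(n), F(n+1)) returns to (a, b)."""
    x, y, n = a, b, 0
    while True:
        x, y, n = y, (x + y) % p, n + 1
        if (x, y) == (a, b):
            return n

for p in [3, 7, 23, 43]:
    assert all(period(a, b, p) == 2 * (p + 1)
               for a in range(p) for b in range(p) if (a, b) != (0, 0))
print("every nonzero pair mod p has period exactly 2(p + 1)")
</code></pre>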
Answer 3: <p>One way to prove your simplification (nice job btw!) [Not sure if this is nice enough, but it should be a little more satisfying than brute force]: Consider the group $G := \langle A^2 \rangle \subset GL(2,\mathbb{F}_7)$ where $A=\begin{pmatrix}0&1\\1&1\end{pmatrix}$. Note that $A^2 \begin{pmatrix}F_n\\F_{n+1}\end{pmatrix} = \begin{pmatrix} F_{n+2}\\ F_{n+3}\end{pmatrix}$ (mod $7$) for all $n\in \mathbb{N}$. Computing powers of $A$ mod $7$: $A^2= \begin{pmatrix}1&1\\1&2\end{pmatrix}$, $A^4 = \begin{pmatrix} 2&3\\3&5\end{pmatrix}$, $A^8 = -I$. Thus $A$ has order $16$ and $|G|=8$.</p>
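<p>The matrix computation is easily reproduced; a Python sketch (2x2 matrices as nested tuples, no external libraries assumed) verifying $A^8 = -I$ and hence the order $16$:</p>
<pre><code>def matmul(M, N, p=7):
    """Multiply two 2x2 matrices mod p."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

A = ((0, 1), (1, 1))
P = A
for k in (2, 4, 8, 16):               # repeated squaring: A^2, A^4, A^8, A^16
    P = matmul(P, P)
    print(k, P)
# 2  ((1, 1), (1, 2))
# 4  ((2, 3), (3, 5))
# 8  ((6, 0), (0, 6))   i.e. -I mod 7
# 16 ((1, 0), (0, 1))   so A has order 16
</code></pre>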
<p>Let $G$ operate on $\mathbb{F}_7^2$ by multiplication mod $7$. It is easy to see that every element $(0,0)\neq (a,b)\in\mathbb{F}_7^2$ is fixed only by the identity element of $G$. So its orbit has $8$ elements, and since $|\mathbb{F}_7^2| = 49 = 8\cdot 6 + 1$, there are $6$ such orbits in total. Let's consider the case that some orbit contains an element of type $(0,a)$ or $(a,0)$. The Fibonacci sequences starting at these points mod $7$ look like this:</p>
<p>$0,a,a,2a,3a,5a,a,-a,0,-a,\dots$</p>
<p>$a,0,a,a,2a,3a,5a,a,-a,0,\dots$</p>
<p>where the dots indicate that the same sequence as before is repeated with just $a$ exchanged for $-a$. So the orbits (excluding the trivial one) of pairs in $\mathbb{F}_7^2$, which contain a zero in one of their pairs, contain exactly $2$ zeroes (in two of their pairs). Since there are (excluding the pair $(0,0)$) exactly $6\cdot 2 = 12$ of these, each of the $6$ orbits contains exactly two of them, with the 'distance' (in the Fibonacci sequence) from one zero to the next being $8$ (following from the expansion of the sequence above). So any $8$ consecutive numbers in a Fibonacci sequence contain exactly one that is divisible by $7$.</p>
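<p>That concluding claim can be spot-checked directly; a short Python sketch verifying that every window of $8$ consecutive Fibonacci numbers contains exactly one multiple of $7$:</p>
<pre><code>fib = [0, 1]
while len(fib) < 500:
    fib.append(fib[-1] + fib[-2])

assert all(sum(f % 7 == 0 for f in fib[i:i + 8]) == 1
           for i in range(len(fib) - 8))
print("each window of 8 consecutive Fibonacci numbers has exactly one multiple of 7")
</code></pre>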
Answer 4: <p>Because there only a limited number of possible pair values $\bmod 7$, and because given a certain pair you can work both forwards and backwards uniquely under Fibonacci rules, it is inevitable that you have cycles of values that return to their starting pair. I think there might be an element of small-number effect that the $48$ possible value pairs (since we exclude $0,0$) fall neatly into just $3$ cycles of length $16$, all including zeroes:
$$\begin{array}{c}
0 & 1 & 1 & 2 & 3 & 5 & 1 & 6 & 0 & 6 & 6 & 5 & 4 & 2 & 6 & 1 & (0) \\
0 & 2 & 2 & 4 & 6 & 3 & 2 & 5 & 0 & 5 & 5 & 3 & 1 & 4 & 5 & 2 & (0) \\
0 & 3 & 3 & 6 & 2 & 1 & 3 & 4 & 0 & 4 & 4 & 1 & 5 & 6 & 4 & 3 & (0) \\
\end{array}$$</p>
<p>So every starting pair can be found in these cycles, which exhibit your symmetric effect of having $a_i\equiv -a_{i+8}$ (which also perpetuates through the Fibonacci recurrence).</p>
<p>So I don't know if that counts as a "nice way" - I certainly think there is more structure to be understood - but this is an interesting area that I was also looking at across bases (although I hadn't yet looked at variant starting values).</p>
Answer 5: <p>Funnily enough this happens for all $p$ in the sequence <a href="https://oeis.org/A000057" rel="nofollow noreferrer">A000057</a>, this
sequence consists of primes $p$ for which the Fibonacci entry point is
$p+1$ (i.e. the first positive $n$ for which $p$ divides $F(n)$ is $p+1$,
where $(F(n))_{n \geq 0}$ is the Fibonacci sequence). Note that for all
$p$ in this list the polynomial $x^2 - x - 1$ is irreducible in $\mathbb{F}
_p[x]$ as is explained in section 4 of <a href="http://math.stanford.edu/~brianrl/notes/fibonacci.pdf" rel="nofollow noreferrer">these</a> notes.</p>
<p><strong>The Conjecture</strong></p>
<p>Take a prime number $p$ in A000057.
The first few of these primes are $$2,\ 3,\ 7,\ 23,\ 43,\ 67,\ 83,\ 103,\ 127,
\ 163,\dots.$$ Now let $F_{a,b}(n)$ be a sequence in $\mathbb{F}_p$ defined
by $$F_{a,b}(0) = a,\ F_{a,b}(1) = b \text{ and } F_{a,b}(n) + F_{a,b}(n+1) =
F_{a,b}(n+2).$$ We let $r_{a,b}(n)$ be the unique element of $\{0, \dots, p-1\}$
such that $r_{a,b}(n) \equiv F_{a,b}(n) \pmod{p}$. If we, for example, write down the
first couple of terms of the sequence $r_{0,1}(n)$ for $p = 7$ we
get $$ 0,\ 1,\ 1,\ 2,\ 3,\ 5,\ 1,\ 6,\ 0,\ 6,\ 6,\ 5,\ 4,\ 2,\ 6,\ 1,\dots $$ if we calculate
the sum of these $16$ values we get $49$. We can try this a couple of more times
for different values of $p$ and come to the conjecture that if $a,b$ are not both zero
we find that $$\sum_{n=0}^{2p+1} r_{a,b}(n) = p^2.$$</p>
<p><strong>The Proof</strong></p>
<p>First we observe that $F_{a,b}(n) + F_{a,b}(n + p + 1)$ seems to be zero for all
starting values $a,b$ and all $n$. To prove this we use the theory of characteristic
polynomials for recurrence relations to obtain a general formula for the value of
$F_{a,b}(n)$. The characteristic polynomial for this relation is given by $f(x)
= x^2 - x - 1$. Since we have chosen $p$ such that $f(x)$ is irreducible in
$\mathbb{F}_p[x]$ we have to go to the extension field $$\mathbb{F}_{p^2}:=
\mathbb{F}_p[x]/(f(x))$$ to obtain the roots $z_1,z_2$ such that $f(x) = (x-z_1)
(x-z_2)$. By comparing coefficients we see that $z_1 \cdot z_2 = -1$ and from the
theory of finite fields we know that the Frobenius map $x \mapsto x^p$ sends
$z_1$ to $z_2$ and vice versa. Now the general term $F_{a,b}(n)$ is given
by $$F_{a,b}(n) = c_1 \cdot z_1^n + c_2 \cdot z_2^n,$$ where $c_1,c_2$ are
constants in $\mathbb{F}_{p^2}$ that depend on the starting values $a,b$. We
can now calculate that</p>
<p>\begin{align*}
F_{a,b}(n+p+1) &= c_1\cdot z_1^{n+p+1} + c_2 \cdot {z_2}^{n+p+1} \\
&= c_1 \cdot z_1^n \cdot z_1 \cdot z_1^p + c_2 \cdot {z_2}^n \cdot {z_2} \cdot {z_2}^p \\
&= c_1 \cdot z_1^n \cdot (z_1 \cdot {z_2}) + c_2 \cdot {z_2}^n \cdot ({z_2} \cdot z_1) \\
&= - c_1 \cdot z_1^n - c_2 \cdot {{z_2}}^{n}\\
&= -F_{a,b}(n).
\end{align*}</p>
<p>This tells us that indeed $F_{a,b}(n) + F_{a,b}(n + p + 1) = 0$. Let us now consider
the sequences of the form $F_{0,a}(n)$, such that the first value is zero. We see that
$c_1 = -c_2$ such that the general formula is of the form $$F_{0,a}(n) = C(z_1^n -
z_2^n),$$ where $C \in \mathbb{F}_{p^2}$ depends on $a$. We want to know when
$F_{0,a}(n) = 0$ for the first time with $n > 0$ for non-zero $a$. Looking at the direct
formula we see that $F_{0,a}(n) = 0$ if and only if $z_1^n = z_2^n$. From Galois theory
we know that this happens if and only if $z_1^n \in \mathbb{F}_{p}$. Now using the fact
that $z_1^2 = z_1 +1$ we can prove by induction that $$z_1^n = \overline{F
(n)} \cdot z_1 + \overline{F(n-1)},$$ where $(F(n))_{n \geq 0}$ is now the regular
Fibonacci sequence. So we see that this is an element of $\mathbb{F}_{p}$ if and only if
$F(n)$ is divisible by $p$. So we are looking for the first positive $n$ such that
$F(n)$ is divisible by $p$. We have chosen $p$ precisely such that this value is $p+1$.
This means that the period of $F_{0,a}$ is at least $p+1$, but since $F_{0,a}(1) = a$
and $F_{0,a}(p+2) = -a$ and these are not equal, we see that the period of $F_{0,a}$
is $2(p+1)$. So we see that the pairs
$$
(F_{0,a}(0),F_{0,a}(1)),\ (F_{0,a}(1),F_{0,a}(2)),\ \dots, \ (F_{0,a}(2p+1),F_{0,a}(2p+2))
$$
are all different. We also see that the sequences $$F_{0,1},\ F_{0,2}, \dots,
F_{0,\frac{p-1}{2}}$$ are all different since a sequence of the form $F_{0,a}$ contains
only the two tuples $(0,a)$ and $(0,-a)$ with $0$ as the first entry of the pair. So if we
now count pairs we have found $$\frac{p-1}{2}\cdot 2(p+1) = p^2-1,$$ together with the
trivial sequence $F_{0,0}$ we see that we have counted all $p^2$ pairs and thus all
sequences $F_{a,b}$ are actually of the form $F_{0,c}$ for some $c$ possibly shifted.
Since the period of $F_{a,b}$ is $2(p+1)$ the sum is constant under the shifting of the
sequence such that $$\sum_{n=0}^{2p+1} r_{a,b}(n) = \sum_{n=0}^{2p+1} r_{0,c}(n),$$
for some $c$. We have seen that if $c \neq 0$ indeed $0$ only appears two times in the
sum. Since $F_{a,b}(n+p+1)=-F_{a,b}(n)$ we see that $r_{a,b}(n+p+1)+ r_{a,b}(n) = p$
if $F_{a,b}(n) \neq 0$ and thus we see that the sum is equal to
$$\sum_{n=0}^{2p+1} r_{0,c}(n) = p^2.$$</p>
Answer 6: <p>One way to prove your simplification (nice job btw!) [Not sure if this is nice enough, but it should be a little more satisfying than brute force]: Consider the group $G:=<A^2>\subset GL(2,\mathbb{F}_7)$ where $A=\begin{pmatrix}0&1\\1&1\end{pmatrix}$. Note that $A^2 \begin{pmatrix}F_n\\F_{n+1}\end{pmatrix} = \begin{pmatrix} F_{n+2}\\ F_{n+3}\end{pmatrix}$ (mod $7$) for all $n\in \mathbb{N}$. Computing powers of $A$ mod $7$: $A^2= \begin{pmatrix}1&1\\1&2\end{pmatrix}$, $A^4 = \begin{pmatrix} 2&3\\3&5\end{pmatrix}$, $A^8 = -I$. Thus $A$ has order $16$ and $|G|=8$.</p>
<p>Let $G$ operate on $\mathbb{F}_7^2$ by multiplication mod $7$. It is easy to see that every element $(0,0)\neq (a,b)\in\mathbb{F}_7^2$ is only fixed by the identity element of $G$. So its orbit has $8$ elements, and since $|\mathbb{F}_7^2| = 49 = 8\cdot 6 + 1$, there are $6$ of these in total. Let's consider the case that some orbit contains an element of type $(0,a)$ or $(a,0)$. The Fibonacci sequences starting at these points mod $7$ look like this:</p>
<p>$0,a,a,2a,3a,5a,a,-a,0,-a,..$</p>
<p>$a,0,a,a,2a,3a,5a,a,-a,0,..$</p>
<p>where the dots indicate that the same sequence as before is repeated with just $a$ exchanged for $-a$. So the orbits (excluding the trivial one) of pairs in $\mathbb{F}_7^2$ ,which contain a zero in one of their pairs, contain exactly 2 zeroes (in two of their pairs). Since there are (excluding the pair $(0,0)$) exactly $6\cdot 2 = 12$ of these, each of the $6$ orbits contains exactly two of them, with the 'distance' (in the Fibonacci sequence) of one zero to the next being $8$ (following from the expansion of the sequence above). So any $8$ consecutive numbers in a Fibonacci sequence contain exactly one that is divisible by $7$.</p>
Answer 7: <p>Because there only a limited number of possible pair values $\bmod 7$, and because given a certain pair you can work both forwards and backwards uniquely under Fibonacci rules, it is inevitable that you have cycles of values that return to their starting pair. I think there might be an element of small-number effect that the $48$ possible value pairs (since we exclude $0,0$) fall neatly into just $3$ cycles of length $16$, all including zeroes:
$$\begin{array}{c}
0 & 1 & 1 & 2 & 3 & 5 & 1 & 6 & 0 & 6 & 6 & 5 & 4 & 2 & 6 & 1 & (0) \\
0 & 2 & 2 & 4 & 6 & 3 & 2 & 5 & 0 & 5 & 5 & 3 & 1 & 4 & 5 & 2 & (0) \\
0 & 3 & 3 & 6 & 2 & 1 & 3 & 4 & 0 & 4 & 4 & 1 & 5 & 6 & 4 & 3 & (0) \\
\end{array}$$</p>
<p>So every starting pair can be found in these cycles, which exhibit your symmetric effect of having $a_i\equiv -a_{i+8}$ (which also perpetuates through the Fibonacci recurrence).</p>
<p>So I don't know if that counts as a "nice way" - I certainly think there is more structure to be understood - but this is an interesting area that I was also looking at across bases (although I hadn't yet looked at variant starting values).</p>
Answer 8: <p>Funnily enough this happens for all $p$ in the sequence <a href="https://oeis.org/A000057" rel="nofollow noreferrer">A000057</a>, this
sequence consists of primes $p$ for which the Fibonacci entry point is
$p+1$ (i.e. the first positive $n$ for which $p$ divides $F(n)$ is $p+1$,
where $(F(n))_{n \geq 0}$ is the Fibonacci sequence). Note that for all
$p$ in this list the polynomial $x^2 - x - 1$ is irreducible in $\mathbb{F}
_p[x]$ as is explained in section 4 of <a href="http://math.stanford.edu/~brianrl/notes/fibonacci.pdf" rel="nofollow noreferrer">these</a> notes.</p>
<p><strong>The Conjecture</strong></p>
<p>Take a prime number $p$ in A000057.
The first few of these primes are $$2,\ 3,\ 7,\ 23,\ 43,\ 67,\ 83,\ 103,\ 127,
\ 163,\dots.$$ Now let $F_{a,b}(n)$ be a sequence in $\mathbb{F}_p$ defined
by $$F_{a,b}(0) = a,\ F_{a,b}(1) = b \text{ and } F_{a,b}(n) + F_{a,b}(n+1) =
F_{a,b}(n+2).$$ We let $r_{a,b}(n)$ be the unique element of $\{0, \dots, p-1\}$
such that $r_{a,b}(n) \equiv F_{a,b}(n) \pmod{p}$. If we, for example, write down the
first couple of terms of the sequence $r_{0,1}(n)$ for $p = 7$ we
get $$ 0,\ 1,\ 1,\ 2,\ 3,\ 5,\ 1,\ 6,\ 0,\ 6,\ 6,\ 5,\ 4,\ 2,\ 6,\ 1,\dots $$ if we calculate
the sum of these $16$ values we get $49$. We can try this a couple of more times
for different values of $p$ and come to the conjecture that if $a,b$ are not both zero
we find that $$\sum_{n=0}^{2p+1} r_{a,b}(n) = p^2.$$</p>
<p><strong>The Proof</strong></p>
<p>First we observe that $F_{a,b}(n) + F_{a,b}(n + p + 1)$ seems to be zero for all
starting values $a,b$ and all $n$. To prove this we use the theory of characteristic
polynomials for recurrence relations to obtain a general formula for the value of
$F_{a,b}(n)$. The characteristic polynomial for this relation is given by $f(x)
= x^2 - x - 1$. Since we have chosen $p$ such that $f(x)$ is irreducible in
$\mathbb{F}_p[x]$ we have to go to the extension field $$\mathbb{F}_{p^2}:=
\mathbb{F}_p[x]/(f(x))$$ to obtain the roots $z_1,z_2$ such that $f(x) = (x-z_1)
(x-z_2)$. By comparing coefficients we see that $z_1 \cdot z_2 = -1$ and from the
theory of finite fields we know that the Frobenius map $x \mapsto x^p$ sends
$z_1$ to $z_2$ and vice versa. Now the general term $F_{a,b}(n)$ is given
by $$F_{a,b}(n) = c_1 \cdot z_1^n + c_2 \cdot z_2^n,$$ where $c_1,c_2$ are
constants in $\mathbb{F}_{p^2}$ that depend on the starting values $a,b$. We
can now calculate that</p>
<p>\begin{align*}
F_{a,b}(n+p+1) &= c_1\cdot z_1^{n+p+1} + c_2 \cdot {z_2}^{n+p+1} \\
&= c_1 \cdot z_1^n \cdot z_1 \cdot z_1^p + c_2 \cdot {z_2}^n \cdot {z_2} \cdot {z_2}^p \\
&= c_1 \cdot z_1^n \cdot (z_1 \cdot {z_2}) + c_2 \cdot {z_2}^n \cdot ({z_2} \cdot z_1) \\
&= - c_1 \cdot z_1^n - c_2 \cdot {{z_2}}^{n}\\
&= -F_{a,b}(n).
\end{align*}</p>
<p>This tells us that indeed $F_{a,b}(n) + F_{a,b}(n + p + 1) = 0$. Let us now consider
the sequences of the form $F_{0,a}(n)$, i.e. those whose first value is zero. We see that
$c_1 = -c_2$, so the general formula takes the form $$F_{0,a}(n) = C(z_1^n -
z_2^n),$$ where $C \in \mathbb{F}_{p^2}$ depends on $a$. We want to know when
$F_{0,a}(n) = 0$ for the first time with $n > 0$ for non-zero $a$. Looking at the direct
formula we see that $F_{0,a}(n) = 0$ if and only if $z_1^n = z_2^n$. From Galois theory
we know that this happens if and only if $z_1^n \in \mathbb{F}_{p}$. Now using the fact
that $z_1^2 = z_1 +1$ we can prove by induction that $$z_1^n = \overline{F
(n)} \cdot z_1 + \overline{F(n-1)},$$ where $(F(n))_{n \geq 0}$ is now the regular
Fibonacci sequence. So we see that this is an element of $\mathbb{F}_{p}$ if and only if
$F(n)$ is divisible by $p$. So we are looking for the first positive $n$ such that
$F(n)$ is divisible by $p$. We have chosen $p$ precisely such that this value is $p+1$.
This means that the zeroes of $F_{0,a}$ occur exactly at the multiples of $p+1$, so the
period of $F_{0,a}$ is a multiple of $p+1$. Since $F_{0,a}(1) = a$ while $F_{0,a}(p+2) = -a$,
the period is not $p+1$ itself; and since $F_{0,a}(n+2(p+1)) = -F_{0,a}(n+p+1) = F_{0,a}(n)$,
the period of $F_{0,a}$ is exactly $2(p+1)$. So we see that the pairs
$$
(F_{0,a}(0),F_{0,a}(1)),\ (F_{0,a}(1),F_{0,a}(2)),\ \dots, \ (F_{0,a}(2p+1),F_{0,a}(2p+2))
$$
are all different. We also see that the sequences $$F_{0,1},\ F_{0,2}, \dots,
F_{0,\frac{p-1}{2}}$$ are all different since a sequence of the form $F_{0,a}$ contains
only the two tuples $(0,a)$ and $(0,-a)$ with $0$ as the first entry of the pair. So if we
now count pairs we have found $$\frac{p-1}{2}\cdot 2(p+1) = p^2-1,$$ together with the
trivial sequence $F_{0,0}$ we see that we have counted all $p^2$ pairs and thus all
sequences $F_{a,b}$ are, up to a shift, of the form $F_{0,c}$ for some $c$.
Since the period of $F_{a,b}$ is $2(p+1)$, summing over a full period is unchanged by
shifting the sequence, so $$\sum_{n=0}^{2p+1} r_{a,b}(n) = \sum_{n=0}^{2p+1} r_{0,c}(n)$$
for some $c$. We have seen that if $c \neq 0$ then $0$ appears exactly twice in this sum,
namely at $n = 0$ and $n = p+1$. Since $F_{a,b}(n+p+1) = -F_{a,b}(n)$, we have
$r_{a,b}(n+p+1) + r_{a,b}(n) = p$ whenever $F_{a,b}(n) \neq 0$. Grouping the $2p+2$ terms
into the $p+1$ pairs $\bigl(r_{0,c}(n), r_{0,c}(n+p+1)\bigr)$ for $n = 0, \dots, p$, the pair
of zeroes contributes $0$ and each of the remaining $p$ pairs contributes $p$, so indeed
$$\sum_{n=0}^{2p+1} r_{0,c}(n) = p \cdot p = p^2.$$</p>
Answer 9: <p>One way to prove your simplification (nice job btw!) [Not sure if this is nice enough, but it should be a little more satisfying than brute force]: Consider the group $G:=\langle A^2\rangle\subset GL(2,\mathbb{F}_7)$ where $A=\begin{pmatrix}0&1\\1&1\end{pmatrix}$. Note that $A^2 \begin{pmatrix}F_n\\F_{n+1}\end{pmatrix} = \begin{pmatrix} F_{n+2}\\ F_{n+3}\end{pmatrix}$ (mod $7$) for all $n\in \mathbb{N}$. Computing powers of $A$ mod $7$: $A^2= \begin{pmatrix}1&1\\1&2\end{pmatrix}$, $A^4 = \begin{pmatrix} 2&3\\3&5\end{pmatrix}$, $A^8 = -I$. Thus $A$ has order $16$ and $|G|=8$.</p>
<p>Let $G$ operate on $\mathbb{F}_7^2$ by multiplication mod $7$. It is easy to see that every element $(0,0)\neq (a,b)\in\mathbb{F}_7^2$ is only fixed by the identity element of $G$. So its orbit has $8$ elements, and since $|\mathbb{F}_7^2| = 49 = 8\cdot 6 + 1$, there are $6$ of these in total. Let's consider the case that some orbit contains an element of type $(0,a)$ or $(a,0)$. The Fibonacci sequences starting at these points mod $7$ look like this:</p>
<p>$0,a,a,2a,3a,5a,a,-a,0,-a,..$</p>
<p>$a,0,a,a,2a,3a,5a,a,-a,0,..$</p>
<p>where the dots indicate that the same sequence as before is repeated with just $a$ exchanged for $-a$. So the orbits (excluding the trivial one) of pairs in $\mathbb{F}_7^2$ which contain a zero in one of their pairs contain exactly $2$ zeroes (in two of their pairs). Since there are (excluding the pair $(0,0)$) exactly $6\cdot 2 = 12$ such pairs, each of the $6$ orbits contains exactly two of them, with the 'distance' (in the Fibonacci sequence) from one zero to the next being $8$ (following from the expansion of the sequence above). So any $8$ consecutive numbers in a Fibonacci sequence contain exactly one that is divisible by $7$.</p>
|
Prove divergence of sum $\ln(1+1/n)$ | 2,104,389 | 5 | https://math.stackexchange.com/questions/2104389/prove-divergence-of-sum-ln11-n | <blockquote>
<p>We have seen the harmonic series is a divergent series whose terms approach $0$. Show that $$\sum_{n = 1}^\infty \text{ln}\left(1 + \frac{1}{n}\right)$$ is another series with this property. </p>
</blockquote>
<p>Denote $a_n = \ln\left(1 + \frac{1}{n}\right).$ Then, $$\lim_{n \to \infty}\ln\left(1 + \frac{1}{n}\right) = \ln\left(1 + \lim_{n \to \infty}\frac{1}{n}\right) = 0,$$ since $\ln(x)$ is a continuous function on its domain. However, I can't seem to prove that the series is divergent. I know the proof for the harmonic series (which is quite clever in my opinion), and I wondered whether this question required a similar approach?</p>
| [
{
"answer_id": 2104394,
"score": 9,
"text": "<p><strong>Hint</strong>:</p>\n\n<p>$$\\ln\\frac21+\\ln\\frac32+\\ln\\frac43+\\cdots\\ln\\frac{n+1}n=\\ln\\frac{2\\cdot3\\cdot4\\cdot\\cdots(n+1)}{1\\cdot2\\cdot3\\cdot\\cdots n}.$$</p>\n"
},
{
"answer_id": 2104402,
"score": 4,
"text": "<p>For all $n\\in\\mathbb{N}^\\star$ :</p>\n\n<p>$$\\ln\\left(1+\\frac{1}{n}\\right)=\\ln\\left(\\frac{n+1}{n}\\right)=\\ln(n+1)-\\ln(n)$$</p>\n\n<p>so that :</p>\n\n<p>$$\\sum_{n=1}^N\\ln\\left(1+\\frac{1}{n}\\right)=\\ln(N+1)-\\ln(1)=\\ln(N+1)\\underset{N\\to\\infty}{\\longrightarrow}+\\infty$$</p>\n\n<p>and the divergence of the series $\\sum_{n\\ge1}\\ln\\left(1+\\frac{1}{n}\\right)$ is proved.</p>\n\n<p>Note that this gives us a proof (one of the easiest ones) of the divergence of the harmonic series, since :</p>\n\n<p>$$\\forall n\\in\\mathbb{N}^\\star,\\,\\ln\\left(1+\\frac{1}{n}\\right)\\le\\frac{1}{n}$$</p>\n"
},
{
"answer_id": 2104398,
"score": 1,
"text": "<p><strong>Hint:</strong></p>\n\n<p>This is a series with positive terms, and $\\;\\ln\\Bigl(1+\\dfrac1n\\Bigr)\\sim_\\infty\\dfrac1n$.</p>\n"
}
] | Question: Prove divergence of sum $\ln(1+1/n)$
Answer 1: <p><strong>Hint</strong>:</p>
<p>$$\ln\frac21+\ln\frac32+\ln\frac43+\cdots+\ln\frac{n+1}{n}=\ln\frac{2\cdot3\cdot4\cdots(n+1)}{1\cdot2\cdot3\cdots n}.$$</p>
Answer 2: <p>For all $n\in\mathbb{N}^\star$:</p>
<p>$$\ln\left(1+\frac{1}{n}\right)=\ln\left(\frac{n+1}{n}\right)=\ln(n+1)-\ln(n)$$</p>
<p>so that:</p>
<p>$$\sum_{n=1}^N\ln\left(1+\frac{1}{n}\right)=\ln(N+1)-\ln(1)=\ln(N+1)\underset{N\to\infty}{\longrightarrow}+\infty$$</p>
<p>and the divergence of the series $\sum_{n\ge1}\ln\left(1+\frac{1}{n}\right)$ is proved.</p>
<p>Note that this gives us a proof (one of the easiest ones) of the divergence of the harmonic series, since:</p>
<p>$$\forall n\in\mathbb{N}^\star,\,\ln\left(1+\frac{1}{n}\right)\le\frac{1}{n}$$</p>
Answer 3: <p><strong>Hint:</strong></p>
<p>This is a series with positive terms, and $\;\ln\Bigl(1+\dfrac1n\Bigr)\sim_\infty\dfrac1n$.</p>
|
Find CDF of random variable which depends on other variable | 2,105,218 | 5 | https://math.stackexchange.com/questions/2105218/find-cdf-of-random-variable-which-depends-on-other-variable | <blockquote>
<p>Let $X$ be a random variable with uniform distribution on $[-1, 1]$. Find the CDF of random variable Y given by the following formula:</p>
<p>$Y = \left\{\begin{matrix}
-\frac{1}{2},& X < - \frac{1}{2}\\
X,& -\frac{1}{2} \leq X \leq \frac{1}{4}\\
\frac{1}{4}, & X > \frac{1}{4}
\end{matrix}\right.$</p>
</blockquote>
<p>So I've found the PDF and CDF of $X$:</p>
<p>$f_X(x) = \begin{cases}
\frac{1}{2}, & x \in [-1, 1]\\
0, & \text{otherwise}
\end{cases}$</p>
<p>$F_X(a) = \int_{-\infty}^{a} f_X(x) dx = \left\{\begin{matrix}
0, & a \leq -1\\
\frac{a+1}{2}, & a \in (-1, 1) \\
1, & a \geq 1
\end{matrix}\right.$</p>
<p>I tried to find Y's CDF by:</p>
<p>$F_Y(a) = P(Y \leq a)$</p>
<p>$= P(-\frac{1}{2} \leq a, X < - \frac{1}{2}) + P(X \leq a, - \frac{1}{2} \leq X \leq \frac{1}{4}) + P(\frac{1}{4} \leq a, X > \frac{1}{4})$</p>
<p>But what should I do next? I'm finding such CDF for the first time and my notes say I need to consider a few different cases, but I have no clue what they should look like and how to do it. Any tips would be helpful.</p>
| [
{
"answer_id": 2105406,
"score": 4,
"text": "<p><em>Tip:</em> You can find a CDF, but not a pdf, for all points on the support.</p>\n\n<p>Notice that $Y$ will have a support of the interval $[-\\tfrac 12;\\tfrac 14]$, <em>but</em> will have a probability <em>mass</em> at the two end points, with a probability <em>density</em> in the interval between. It is a mixed distribution.</p>\n\n<p>That is $~\\mathsf P(Y{=}-\\tfrac 12)~=~\\mathsf P(-1{\\leq}X{<}-\\tfrac 12)~$ and $~\\mathsf P(Y{=}\\tfrac 14)~=~\\mathsf P(\\tfrac 14{<}X{\\leq}1)~$.</p>\n\n<p>So then</p>\n\n<blockquote class=\"spoiler\">\n <p> $$F_Y(a)~=~\\mathsf P(Y\\leq a) ~=~ \\begin{cases} 0 &:& a<-\\tfrac 12\\\\ \\tfrac {2a+3}4 & : & -\\tfrac 12\\leq a < \\tfrac 14 \\\\ 1 &:& \\tfrac 14 \\leq a \\end{cases}$$</p>\n</blockquote>\n\n<p>If you've peeked, notice the step discontinuities.</p>\n"
}
] | Question: Find CDF of random variable which depends on other variable
Answer 1: <p><em>Tip:</em> You can find a CDF, but not a pdf, for all points on the support.</p>
<p>Notice that $Y$ will have a support of the interval $[-\tfrac 12;\tfrac 14]$, <em>but</em> will have a probability <em>mass</em> at the two end points, with a probability <em>density</em> in the interval between. It is a mixed distribution.</p>
<p>That is $~\mathsf P(Y{=}-\tfrac 12)~=~\mathsf P(-1{\leq}X{<}-\tfrac 12)~$ and $~\mathsf P(Y{=}\tfrac 14)~=~\mathsf P(\tfrac 14{<}X{\leq}1)~$.</p>
<p>So then</p>
<blockquote class="spoiler">
<p> $$F_Y(a)~=~\mathsf P(Y\leq a) ~=~ \begin{cases} 0 &:& a<-\tfrac 12\\ \tfrac {a+1}2 & : & -\tfrac 12\leq a < \tfrac 14 \\ 1 &:& \tfrac 14 \leq a \end{cases}$$</p>
</blockquote>
<p>If you've peeked, notice the step discontinuities.</p>
|
What happens to an absolute value graph if $|x|$ has a coefficient | 2,105,419 | 5 | https://math.stackexchange.com/questions/2105419/what-happens-to-an-absolute-value-graph-if-x-has-a-coefficient | <p>I skipped Algebra I in school, and we have Mathematics midterms next week. While going through our review packet, I noticed graphing absolute values, something I had never seen before.</p>
<p>I've figured out the basics: $|x+n|$ translates the graph $n$ units along the x axis, $|x|+d$ translates the graph $d$ units along the y axis, and $-|x|$ flips the graph so it opens downward.</p>
<p>What happens, however, if we have $a|x|$, or $|ax|$? Is there an easy short hand way to draw this, or do I have to make a chart of the points and graph them one by one?</p>
| [
{
"answer_id": 2105422,
"score": 4,
"text": "<blockquote>\n <p>$|x+n|$ translates the graph $n$ units along the x axis, $|x|+d$\n translates the graph $d$ units along the y axis, and $-|x|$ flips the\n graph so it opens downward.</p>\n</blockquote>\n\n<p>$\\lvert x + n \\rvert$ is the graph of $\\lvert x \\rvert$ translated $n$ units to the left if $n > 0$. If $n< 0$ it will translate to the right. $n=0$ changes nothing.\nE.g. the tip will move from $x=0$ to $x=-n$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/jyVrR.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/jyVrR.png\" alt=\"enter image description here\"></a></p>\n\n<p>You can fiddle with it <a href=\"https://ggbm.at/vWe8VrUH\" rel=\"nofollow noreferrer\">here</a>.</p>\n\n<p>$\\lvert x \\rvert + d$ is the graph of $\\lvert x \\rvert$ translated $d$ units upwards if $d > 0$. If $d < 0$ it will get translated downwards. E.g. the the tip will move from $y=0$ to $y = d$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/OO3T1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OO3T1.png\" alt=\"enter image description here\"></a></p>\n\n<p>You can fiddle with it <a href=\"https://ggbm.at/Em7p4y83\" rel=\"nofollow noreferrer\">here</a>.</p>\n\n<p>$-\\lvert x \\rvert$ is the graph of $\\lvert x \\rvert$ mirrored along the $x$-axis.</p>\n\n<p><a href=\"https://i.stack.imgur.com/gIdPn.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/gIdPn.png\" alt=\"enter image description here\"></a></p>\n\n<blockquote>\n <p>What happens, however, if we have $a|x|$, or $|ax|$?</p>\n</blockquote>\n\n<p>$a\\lvert x \\rvert$ squeezes the graph of $\\lvert x \\rvert$ horizontally with growing $a$, if $a>1$. E.g. $(1,1)$ will get mapped to $(1,a)$.\nIf $a = 1$ nothing changes. If $a \\in (0,1)$ then the graph will widen horizontally. If $a = 0$ the graph will flatline to the constant zero function.</p>\n\n<p>If $a$ is negative one will have an additional mirroring at the $x$-axis.</p>\n\n<p><a href=\"https://i.stack.imgur.com/CEHaW.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/CEHaW.png\" alt=\"enter image description here\"></a></p>\n\n<p>You can fiddle with it <a href=\"https://ggbm.at/KXu4VFAW\" rel=\"nofollow noreferrer\">here</a>.</p>\n\n<p>$\\lvert a x \\rvert$ is just $\\lvert a\\rvert \\lvert x \\rvert$.\nThis leads to a different dynamics, as there are no negative factors anymore compared to the previous example.</p>\n\n<p>You can fiddle with it <a href=\"https://ggbm.at/Sbnex5Ub\" rel=\"nofollow noreferrer\">here</a>.</p>\n"
},
{
"answer_id": 2105424,
"score": 3,
"text": "<p>I just realized this, moments after posting.</p>\n\n<p>If $a$ exists in the context of the question, it affects the graph as $m$ would in a linear equation.</p>\n\n<p>In $a = 1$ or $a = -1$, the slope on either side is $1$ or $-1$ respectively. The value of $a$ represents the slope on the right hand side of the line of symmetry and the opposite of the slope on the left hand side. (For $y=|ax|$, we can assume $y=|a||x|$, and simply find the abolsute value of $a$, and get $y=a|x|$)</p>\n\n<p>For example, if $a=1$:\n<img src=\"https://s3.amazonaws.com/grapher/exports/4wgn8ag4pq.png\" alt=\"Graph of $y=1|x|$\">\nBut, if $a=2$,\n<img src=\"https://s3.amazonaws.com/grapher/exports/kufxjl3lo5.png\" alt=\"Graph of $y=2|x|$\"></p>\n"
},
{
"answer_id": 2105420,
"score": 2,
"text": "<p>For starters,</p>\n\n<p>$$|ax|=|a||x|=\\begin{cases}+|a|x;&x\\ge0\\\\-|a|x;&x<0\\end{cases}\\implies\\text{slope is }\\pm a$$</p>\n\n<p>which is just a taller or shorter $V$ shaped graph.</p>\n"
}
] | Question: What happens to an absolute value graph if $|x|$ has a coefficient
Answer 1: <blockquote>
<p>$|x+n|$ translates the graph $n$ units along the x axis, $|x|+d$
translates the graph $d$ units along the y axis, and $-|x|$ flips the
graph so it opens downward.</p>
</blockquote>
<p>$\lvert x + n \rvert$ is the graph of $\lvert x \rvert$ translated $n$ units to the left if $n > 0$. If $n< 0$ it will translate to the right. $n=0$ changes nothing.
E.g. the tip will move from $x=0$ to $x=-n$.</p>
<p><a href="https://i.stack.imgur.com/jyVrR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jyVrR.png" alt="enter image description here"></a></p>
<p>You can fiddle with it <a href="https://ggbm.at/vWe8VrUH" rel="nofollow noreferrer">here</a>.</p>
<p>$\lvert x \rvert + d$ is the graph of $\lvert x \rvert$ translated $d$ units upwards if $d > 0$. If $d < 0$ it will get translated downwards. E.g. the tip will move from $y=0$ to $y = d$.</p>
<p><a href="https://i.stack.imgur.com/OO3T1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OO3T1.png" alt="enter image description here"></a></p>
<p>You can fiddle with it <a href="https://ggbm.at/Em7p4y83" rel="nofollow noreferrer">here</a>.</p>
<p>$-\lvert x \rvert$ is the graph of $\lvert x \rvert$ mirrored along the $x$-axis.</p>
<p><a href="https://i.stack.imgur.com/gIdPn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gIdPn.png" alt="enter image description here"></a></p>
<blockquote>
<p>What happens, however, if we have $a|x|$, or $|ax|$?</p>
</blockquote>
<p>$a\lvert x \rvert$ squeezes the graph of $\lvert x \rvert$ horizontally with growing $a$, if $a>1$. E.g. $(1,1)$ will get mapped to $(1,a)$.
If $a = 1$ nothing changes. If $a \in (0,1)$ then the graph will widen horizontally. If $a = 0$ the graph will flatline to the constant zero function.</p>
<p>If $a$ is negative, one will have an additional mirroring at the $x$-axis.</p>
<p><a href="https://i.stack.imgur.com/CEHaW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CEHaW.png" alt="enter image description here"></a></p>
<p>You can fiddle with it <a href="https://ggbm.at/KXu4VFAW" rel="nofollow noreferrer">here</a>.</p>
<p>$\lvert a x \rvert$ is just $\lvert a\rvert \lvert x \rvert$.
This leads to different dynamics, as there are no negative factors anymore, in contrast to the previous example.</p>
<p>You can fiddle with it <a href="https://ggbm.at/Sbnex5Ub" rel="nofollow noreferrer">here</a>.</p>
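<p>A quick numeric check of the three vertex claims above, as a minimal sketch assuming NumPy is available:</p>
<pre><code>import numpy as np

n, d = 2.0, 3.0                      # example shift parameters
x = np.linspace(-10, 10, 20001)      # grid with 0.001 spacing

# y = |x + n|: the tip should move to x = -n
print(x[np.argmin(np.abs(x + n))])   # -> -2.0

# y = |x| + d: the tip stays at x = 0 but rises to y = d
y = np.abs(x) + d
print(x[np.argmin(y)], y.min())      # -> 0.0, 3.0

# y = -|x|: mirrored along the x-axis, maximum 0 at x = 0
y = -np.abs(x)
print(x[np.argmax(y)], y.max())      # -> 0.0, 0.0
</code></pre>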
Answer 2: <p>I just realized this, moments after posting.</p>
<p>If $a$ appears as a coefficient, as in the question, it affects the graph much as the slope $m$ would in a linear equation.</p>
<p>If $a = 1$ or $a = -1$, the slope on either side is $1$ or $-1$ respectively. The value of $a$ gives the slope on the right-hand side of the line of symmetry and the opposite of the slope on the left-hand side. (For $y=|ax|$, we can write $y=|a||x|$, so taking the absolute value of $a$ reduces it to the previous case $y=a|x|$.)</p>
<p>For example, if $a=1$:
<img src="https://s3.amazonaws.com/grapher/exports/4wgn8ag4pq.png" alt="Graph of $y=1|x|$">
But, if $a=2$,
<img src="https://s3.amazonaws.com/grapher/exports/kufxjl3lo5.png" alt="Graph of $y=2|x|$"></p>
Answer 3: <p>For starters,</p>
<p>$$|ax|=|a||x|=\begin{cases}+|a|x;&x\ge0\\-|a|x;&x<0\end{cases}\implies\text{slope is }\pm |a|$$</p>
<p>which is just a taller or shorter $V$-shaped graph.</p>
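<p>Both facts — the identity $|ax|=|a|\,|x|$ and the $\pm|a|$ branch slopes — are easy to confirm numerically; a minimal sketch, assuming NumPy is available:</p>
<pre><code>import numpy as np

x = np.linspace(-5, 5, 1001)

for a in (2.0, 0.5, -3.0):
    # |ax| and |a||x| agree pointwise, whatever the sign of a
    assert np.allclose(np.abs(a * x), np.abs(a) * np.abs(x))

    # rise over run on each branch of y = |ax|
    right_slope = (abs(a * 1.0) - abs(a * 0.0)) / 1.0    # +|a|
    left_slope = (abs(a * 0.0) - abs(a * -1.0)) / 1.0    # -|a|
    print(a, right_slope, left_slope)
</code></pre>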
|
Evaluate $\int_{0}^{1}\left ( x+\frac{x}{x+\frac{x}{x+\frac{x}{x+\cdots }}} \right )dx$ | 2,105,428 | 5 | https://math.stackexchange.com/questions/2105428/evaluate-int-01-left-x-fracxx-fracxx-fracxx-cdots-rig | <p>How to evaluate the following integral
$$\int_0^1 \left ( x+\cfrac{x}{x+\cfrac{x}{x+\cfrac{x}{x+\cdots }}} \right )dx$$
I have no idea how to deal with the continued fraction.</p>
| [
{
"answer_id": 2105432,
"score": 5,
"text": "<p><strong>HINT</strong>$$f(x)=x+\\frac{x}{x+\\dfrac{x}{x+\\dfrac{x}{x+\\cdots }}} \\implies f(x)=x+\\frac{x}{f(x)} \\implies f(x)^2-xf(x)-x=0$$\nSolve for $f(x)$. Use the fact that $f(x)>0$ to get $$f(x)=\\frac{x+\\sqrt{x^2+4x}}{2}$$\nSo we have $$\\int_{0}^{1} \\frac{x+\\sqrt{x^2+4x}}{2}\ndx$$\nCan you take it from here?</p>\n"
},
{
"answer_id": 2105435,
"score": 3,
"text": "<p>Let the expression given under integral be A. Hnece\n$$x+\\frac{x}{A}=A$$\nSo\n$$A=\\frac{x+\\sqrt{\\left ( x+2 \\right )^{2}-2^{2}}}{2}$$\nNow the integral become\n$$\\frac{1}{2}\\int_0^1 x+\\sqrt{\\left ( x+2 \\right )^{2}-2^{2}}\\, \\mathrm{d}x$$\n hope you can take it from here.</p>\n"
},
{
"answer_id": 2105436,
"score": 3,
"text": "<p>If</p>\n\n<p>$$y=x+\\cfrac x{x+\\cfrac x{x+\\cfrac x\\ddots}}$$</p>\n\n<p>then it follows that</p>\n\n<p>$$y=x+\\frac xy$$</p>\n\n<p>Which upon solving (with $y\\ge0$),</p>\n\n<p>$$y=\\frac{x+\\sqrt{x^2+4x}}2$$</p>\n\n<p>And thus, we have</p>\n\n<p>$$\\int_0^1\\frac{x+\\sqrt{x^2+4x}}2\\ dx=\\frac{1+3\\sqrt5}4-2\\operatorname{csch}^{-1}(2)$$</p>\n\n<p>where we used the inverse hyperbolic cosecant.</p>\n"
}
] | Question: Evaluate $\int_{0}^{1}\left ( x+\frac{x}{x+\frac{x}{x+\frac{x}{x+\cdots }}} \right )dx$
Answer 1: <p><strong>HINT</strong>$$f(x)=x+\frac{x}{x+\dfrac{x}{x+\dfrac{x}{x+\cdots }}} \implies f(x)=x+\frac{x}{f(x)} \implies f(x)^2-xf(x)-x=0$$
Solve for $f(x)$. Use the fact that $f(x)>0$ to get $$f(x)=\frac{x+\sqrt{x^2+4x}}{2}$$
So we have $$\int_{0}^{1} \frac{x+\sqrt{x^2+4x}}{2}\,dx$$
Can you take it from here?</p>
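<p>Before integrating, the closed form for $f(x)$ can be sanity-checked by truncating the continued fraction; a rough sketch (the depth of $50$ is an arbitrary choice):</p>
<pre><code>import math

def truncated_cf(x, depth=50):
    """Evaluate x + x/(x + x/(...)) truncated at a finite depth."""
    val = x
    for _ in range(depth):
        val = x + x / val
    return val

def closed_form(x):
    return (x + math.sqrt(x * x + 4 * x)) / 2

for x in (0.1, 0.5, 1.0):
    print(x, truncated_cf(x), closed_form(x))   # the two columns agree closely
</code></pre>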
Answer 2: <p>Let the expression under the integral be $A$. Hence
$$x+\frac{x}{A}=A$$
So
$$A=\frac{x+\sqrt{\left ( x+2 \right )^{2}-2^{2}}}{2}$$
Now the integral becomes
$$\frac{1}{2}\int_0^1 x+\sqrt{\left ( x+2 \right )^{2}-2^{2}}\, \mathrm{d}x$$
Hope you can take it from here.</p>
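<p>One way to finish from here (a sketch of the remaining computation): substitute $u=x+2$ and use the standard antiderivative $\int\sqrt{u^{2}-a^{2}}\,\mathrm{d}u=\frac{u}{2}\sqrt{u^{2}-a^{2}}-\frac{a^{2}}{2}\ln\left(u+\sqrt{u^{2}-a^{2}}\right)+C$ with $a=2$, which gives
$$\frac{1}{2}\int_0^1 x+\sqrt{\left ( x+2 \right )^{2}-2^{2}}\, \mathrm{d}x=\frac{1}{4}+\frac{1}{2}\left[\frac{u}{2}\sqrt{u^{2}-4}-2\ln\left(u+\sqrt{u^{2}-4}\right)\right]_{2}^{3}=\frac{1+3\sqrt{5}}{4}-\ln\frac{3+\sqrt{5}}{2},$$
matching the $\operatorname{csch}^{-1}$ form in the next answer, since $2\operatorname{csch}^{-1}(2)=\ln\frac{3+\sqrt{5}}{2}$.</p>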
Answer 3: <p>If</p>
<p>$$y=x+\cfrac x{x+\cfrac x{x+\cfrac x\ddots}}$$</p>
<p>then it follows that</p>
<p>$$y=x+\frac xy$$</p>
<p>Which upon solving (with $y\ge0$),</p>
<p>$$y=\frac{x+\sqrt{x^2+4x}}2$$</p>
<p>And thus, we have</p>
<p>$$\int_0^1\frac{x+\sqrt{x^2+4x}}2\ dx=\frac{1+3\sqrt5}4-2\operatorname{csch}^{-1}(2)$$</p>
<p>where we used the inverse hyperbolic cosecant.</p>
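<p>A numeric cross-check of this closed form, as a minimal sketch assuming SciPy is available for the quadrature:</p>
<pre><code>import math
from scipy.integrate import quad

integrand = lambda x: (x + math.sqrt(x * x + 4 * x)) / 2
numeric, _ = quad(integrand, 0, 1)

# csch^{-1}(2) = asinh(1/2), so the closed form can be written with asinh
closed = (1 + 3 * math.sqrt(5)) / 4 - 2 * math.asinh(0.5)
print(numeric, closed)   # both ~ 0.9646
</code></pre>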
|
Prove $\lim_{n\rightarrow \infty }\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\mathrm dt=\frac{1}{4}$ | 2,105,469 | 5 | https://math.stackexchange.com/questions/2105469/prove-lim-n-rightarrow-infty-frac1n-int-frac1n1-frac-cos2t | <p>How to prove
$$\lim_{n\rightarrow \infty }\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\mathrm dt=\frac{1}{4}$$
I used $x\to \dfrac{1}{t}$ but it didn't work.</p>
<p>Any hint? Thank you.</p>
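<p>A quick numeric experiment supports the claimed limit (a minimal sketch, assuming SciPy is available):</p>
<pre><code>from math import cos
from scipy.integrate import quad

f = lambda t: cos(2 * t) / (4 * t * t)
for n in (10, 100, 1000, 10000):
    val, _ = quad(f, 1 / n, 1)
    print(n, val / n)   # approaches 0.25, roughly like 1/4 - 3/(4n)
</code></pre>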
| [
{
"answer_id": 2105474,
"score": 8,
"text": "<p>It's easy to see that when $0<t\\leq 1$, we have $1-2t^2< \\cos2t<1$, hence\n$$\\frac{1}{4t^2}-\\frac{1}{2}<\\frac{\\cos2t}{4t^2}< \\frac{1}{4t^2}$$\nso\n$$\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\left ( \\frac{1}{4t^2}-\\frac{1}{2} \\right )\\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{1}{4t^2}\\, \\mathrm{d}t$$\n$$\\Rightarrow \\frac{1}{4}-\\frac{3}{4n}+\\frac{1}{2n^2}<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{4}-\\frac{1}{4n}$$\nNow take the limit we will get the answer as wanted.</p>\n"
},
{
"answer_id": 2105533,
"score": 5,
"text": "<p><strong>METHODOLOGY $1$: L'Hospital's. Rule</strong></p>\n\n<p>One simple and efficient approach is to use L'Hospital\"s Rule ( <a href=\"https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof\" rel=\"nofollow noreferrer\">See Note at the end of this Section</a> ). </p>\n\n<p>Note that we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac{\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt}{n}&=\\lim_{n\\to \\infty}\\frac{d}{dn}\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt\\\\\\\\\n&=\\lim_{n\\to \\infty}\\left(-\\frac{\\cos(2/n)}{4/n^2}\\times \\frac{-1}{n^2}\\right)\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<hr>\n\n<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>\n\n<p>Enforcing the substitution $t=x/n$ reveals</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac1n \\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt&=\\lim_{n\\to \\infty}\\int_1^n \\frac{\\cos(2x/n)}{4x^2} \\,dx\\\\\\\\ \n&=\\lim_{n\\to \\infty}\\int_1^\\infty \\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\lim_{n\\to \\infty}\\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\frac1{4x^2}\\,dx\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n"
},
{
"answer_id": 2105603,
"score": 1,
"text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = {1 \\over 4}:\\ {\\large ?}}$.</p>\n</blockquote>\n\n<p>With Stolz-Ces$\\mathrm{\\grave{a}}$ro Theorem:\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n\\lim_{n \\to \\infty }\\braces{{1 \\over \\pars{n + 1} - n}\n\\bracks{\\int_{1/\\pars{n + 1}}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t -\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t}}\n\\\\[5mm] = &\\\n\\lim_{n \\to \\infty }\\int_{1/\\pars{n + 1}}^{1/n}{\\cos\\pars{2t} \\over 4t^{2}}\n\\,\\dd t =\n\\lim_{n \\to \\infty}\\bracks{\\pars{{1 \\over n} - {1 \\over n + 1}}{\\cos\\pars{2\\xi_{n}} \\over 4\\xi_{n}^{2}}}\n\\quad\\mbox{where}\\quad {1 \\over n + 1} < \\xi_{n} < {1 \\over n}\n\\end{align}\n<hr>\nMoreover $\\ds{\\pars{~\\mbox{note that}\\ \\lim_{n \\to \\infty}\\,\\,\\xi_{n} = 0~}}$,\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n{1 \\over 4}\\lim_{n \\to \\infty}\n\\bracks{{1 \\over n\\pars{n + 1}}{\\cos\\pars{2\\xi_{n}} \\over \\xi_{n}^{2}}} =\n{1 \\over 4}\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>Note that\n $\\ds{{n \\over n + 1} < n\\pars{n + 1}\\xi_{n}^{2} < {n + 1 \\over n}}$ such that\n $\\ds{\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}} = 1}$.</p>\n</blockquote>\n\n<p>From expression \\eqref{1}:\n$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = \\bbx{\\ds{1 \\over 4}}}$.</p>\n"
},
{
"answer_id": 2105768,
"score": 1,
"text": "<p>For reference, here is a solution based on integration by parts:</p>\n\n<p>\\begin{align*}\n\\int_{1/n}^{1} \\frac{\\cos 2t}{4t^2} \\, dt\n&= \\left[ - \\frac{\\cos 2t}{4t} \\right]_{1/n}^{1} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\\\\n&= \\frac{n}{4}\\cos(2/n) - \\frac{\\cos2}{4} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt.\n\\end{align*}</p>\n\n<p>The last integral term causes no harm, since $|\\sin 2t| \\leq |2t|$ and hence $ \\left| \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\right| \\leq 1$. Thus by the squeezing theorem the claim follows.</p>\n"
},
{
"answer_id": 2105474,
"score": 8,
"text": "<p>It's easy to see that when $0<t\\leq 1$, we have $1-2t^2< \\cos2t<1$, hence\n$$\\frac{1}{4t^2}-\\frac{1}{2}<\\frac{\\cos2t}{4t^2}< \\frac{1}{4t^2}$$\nso\n$$\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\left ( \\frac{1}{4t^2}-\\frac{1}{2} \\right )\\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{1}{4t^2}\\, \\mathrm{d}t$$\n$$\\Rightarrow \\frac{1}{4}-\\frac{3}{4n}+\\frac{1}{2n^2}<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{4}-\\frac{1}{4n}$$\nNow take the limit we will get the answer as wanted.</p>\n"
},
{
"answer_id": 2105533,
"score": 5,
"text": "<p><strong>METHODOLOGY $1$: L'Hospital's. Rule</strong></p>\n\n<p>One simple and efficient approach is to use L'Hospital\"s Rule ( <a href=\"https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof\" rel=\"nofollow noreferrer\">See Note at the end of this Section</a> ). </p>\n\n<p>Note that we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac{\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt}{n}&=\\lim_{n\\to \\infty}\\frac{d}{dn}\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt\\\\\\\\\n&=\\lim_{n\\to \\infty}\\left(-\\frac{\\cos(2/n)}{4/n^2}\\times \\frac{-1}{n^2}\\right)\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<hr>\n\n<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>\n\n<p>Enforcing the substitution $t=x/n$ reveals</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac1n \\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt&=\\lim_{n\\to \\infty}\\int_1^n \\frac{\\cos(2x/n)}{4x^2} \\,dx\\\\\\\\ \n&=\\lim_{n\\to \\infty}\\int_1^\\infty \\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\lim_{n\\to \\infty}\\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\frac1{4x^2}\\,dx\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n"
},
{
"answer_id": 2105603,
"score": 1,
"text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = {1 \\over 4}:\\ {\\large ?}}$.</p>\n</blockquote>\n\n<p>With Stolz-Ces$\\mathrm{\\grave{a}}$ro Theorem:\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n\\lim_{n \\to \\infty }\\braces{{1 \\over \\pars{n + 1} - n}\n\\bracks{\\int_{1/\\pars{n + 1}}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t -\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t}}\n\\\\[5mm] = &\\\n\\lim_{n \\to \\infty }\\int_{1/\\pars{n + 1}}^{1/n}{\\cos\\pars{2t} \\over 4t^{2}}\n\\,\\dd t =\n\\lim_{n \\to \\infty}\\bracks{\\pars{{1 \\over n} - {1 \\over n + 1}}{\\cos\\pars{2\\xi_{n}} \\over 4\\xi_{n}^{2}}}\n\\quad\\mbox{where}\\quad {1 \\over n + 1} < \\xi_{n} < {1 \\over n}\n\\end{align}\n<hr>\nMoreover $\\ds{\\pars{~\\mbox{note that}\\ \\lim_{n \\to \\infty}\\,\\,\\xi_{n} = 0~}}$,\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n{1 \\over 4}\\lim_{n \\to \\infty}\n\\bracks{{1 \\over n\\pars{n + 1}}{\\cos\\pars{2\\xi_{n}} \\over \\xi_{n}^{2}}} =\n{1 \\over 4}\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>Note that\n $\\ds{{n \\over n + 1} < n\\pars{n + 1}\\xi_{n}^{2} < {n + 1 \\over n}}$ such that\n $\\ds{\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}} = 1}$.</p>\n</blockquote>\n\n<p>From expression \\eqref{1}:\n$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = \\bbx{\\ds{1 \\over 4}}}$.</p>\n"
},
{
"answer_id": 2105768,
"score": 1,
"text": "<p>For reference, here is a solution based on integration by parts:</p>\n\n<p>\\begin{align*}\n\\int_{1/n}^{1} \\frac{\\cos 2t}{4t^2} \\, dt\n&= \\left[ - \\frac{\\cos 2t}{4t} \\right]_{1/n}^{1} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\\\\n&= \\frac{n}{4}\\cos(2/n) - \\frac{\\cos2}{4} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt.\n\\end{align*}</p>\n\n<p>The last integral term causes no harm, since $|\\sin 2t| \\leq |2t|$ and hence $ \\left| \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\right| \\leq 1$. Thus by the squeezing theorem the claim follows.</p>\n"
},
{
"answer_id": 2105474,
"score": 8,
"text": "<p>It's easy to see that when $0<t\\leq 1$, we have $1-2t^2< \\cos2t<1$, hence\n$$\\frac{1}{4t^2}-\\frac{1}{2}<\\frac{\\cos2t}{4t^2}< \\frac{1}{4t^2}$$\nso\n$$\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\left ( \\frac{1}{4t^2}-\\frac{1}{2} \\right )\\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{1}{4t^2}\\, \\mathrm{d}t$$\n$$\\Rightarrow \\frac{1}{4}-\\frac{3}{4n}+\\frac{1}{2n^2}<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{4}-\\frac{1}{4n}$$\nNow take the limit we will get the answer as wanted.</p>\n"
},
{
"answer_id": 2105533,
"score": 5,
"text": "<p><strong>METHODOLOGY $1$: L'Hospital's. Rule</strong></p>\n\n<p>One simple and efficient approach is to use L'Hospital\"s Rule ( <a href=\"https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof\" rel=\"nofollow noreferrer\">See Note at the end of this Section</a> ). </p>\n\n<p>Note that we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac{\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt}{n}&=\\lim_{n\\to \\infty}\\frac{d}{dn}\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt\\\\\\\\\n&=\\lim_{n\\to \\infty}\\left(-\\frac{\\cos(2/n)}{4/n^2}\\times \\frac{-1}{n^2}\\right)\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<hr>\n\n<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>\n\n<p>Enforcing the substitution $t=x/n$ reveals</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac1n \\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt&=\\lim_{n\\to \\infty}\\int_1^n \\frac{\\cos(2x/n)}{4x^2} \\,dx\\\\\\\\ \n&=\\lim_{n\\to \\infty}\\int_1^\\infty \\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\lim_{n\\to \\infty}\\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\frac1{4x^2}\\,dx\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n"
},
{
"answer_id": 2105603,
"score": 1,
"text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = {1 \\over 4}:\\ {\\large ?}}$.</p>\n</blockquote>\n\n<p>With Stolz-Ces$\\mathrm{\\grave{a}}$ro Theorem:\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n\\lim_{n \\to \\infty }\\braces{{1 \\over \\pars{n + 1} - n}\n\\bracks{\\int_{1/\\pars{n + 1}}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t -\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t}}\n\\\\[5mm] = &\\\n\\lim_{n \\to \\infty }\\int_{1/\\pars{n + 1}}^{1/n}{\\cos\\pars{2t} \\over 4t^{2}}\n\\,\\dd t =\n\\lim_{n \\to \\infty}\\bracks{\\pars{{1 \\over n} - {1 \\over n + 1}}{\\cos\\pars{2\\xi_{n}} \\over 4\\xi_{n}^{2}}}\n\\quad\\mbox{where}\\quad {1 \\over n + 1} < \\xi_{n} < {1 \\over n}\n\\end{align}\n<hr>\nMoreover $\\ds{\\pars{~\\mbox{note that}\\ \\lim_{n \\to \\infty}\\,\\,\\xi_{n} = 0~}}$,\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n{1 \\over 4}\\lim_{n \\to \\infty}\n\\bracks{{1 \\over n\\pars{n + 1}}{\\cos\\pars{2\\xi_{n}} \\over \\xi_{n}^{2}}} =\n{1 \\over 4}\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>Note that\n $\\ds{{n \\over n + 1} < n\\pars{n + 1}\\xi_{n}^{2} < {n + 1 \\over n}}$ such that\n $\\ds{\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}} = 1}$.</p>\n</blockquote>\n\n<p>From expression \\eqref{1}:\n$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = \\bbx{\\ds{1 \\over 4}}}$.</p>\n"
},
{
"answer_id": 2105768,
"score": 1,
"text": "<p>For reference, here is a solution based on integration by parts:</p>\n\n<p>\\begin{align*}\n\\int_{1/n}^{1} \\frac{\\cos 2t}{4t^2} \\, dt\n&= \\left[ - \\frac{\\cos 2t}{4t} \\right]_{1/n}^{1} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\\\\n&= \\frac{n}{4}\\cos(2/n) - \\frac{\\cos2}{4} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt.\n\\end{align*}</p>\n\n<p>The last integral term causes no harm, since $|\\sin 2t| \\leq |2t|$ and hence $ \\left| \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\right| \\leq 1$. Thus by the squeezing theorem the claim follows.</p>\n"
},
{
"answer_id": 2105474,
"score": 8,
"text": "<p>It's easy to see that when $0<t\\leq 1$, we have $1-2t^2< \\cos2t<1$, hence\n$$\\frac{1}{4t^2}-\\frac{1}{2}<\\frac{\\cos2t}{4t^2}< \\frac{1}{4t^2}$$\nso\n$$\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\left ( \\frac{1}{4t^2}-\\frac{1}{2} \\right )\\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{1}{4t^2}\\, \\mathrm{d}t$$\n$$\\Rightarrow \\frac{1}{4}-\\frac{3}{4n}+\\frac{1}{2n^2}<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{4}-\\frac{1}{4n}$$\nNow take the limit we will get the answer as wanted.</p>\n"
},
{
"answer_id": 2105533,
"score": 5,
"text": "<p><strong>METHODOLOGY $1$: L'Hospital's. Rule</strong></p>\n\n<p>One simple and efficient approach is to use L'Hospital\"s Rule ( <a href=\"https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof\" rel=\"nofollow noreferrer\">See Note at the end of this Section</a> ). </p>\n\n<p>Note that we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac{\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt}{n}&=\\lim_{n\\to \\infty}\\frac{d}{dn}\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt\\\\\\\\\n&=\\lim_{n\\to \\infty}\\left(-\\frac{\\cos(2/n)}{4/n^2}\\times \\frac{-1}{n^2}\\right)\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<hr>\n\n<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>\n\n<p>Enforcing the substitution $t=x/n$ reveals</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac1n \\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt&=\\lim_{n\\to \\infty}\\int_1^n \\frac{\\cos(2x/n)}{4x^2} \\,dx\\\\\\\\ \n&=\\lim_{n\\to \\infty}\\int_1^\\infty \\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\lim_{n\\to \\infty}\\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\frac1{4x^2}\\,dx\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n"
},
{
"answer_id": 2105603,
"score": 1,
"text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = {1 \\over 4}:\\ {\\large ?}}$.</p>\n</blockquote>\n\n<p>With Stolz-Ces$\\mathrm{\\grave{a}}$ro Theorem:\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n\\lim_{n \\to \\infty }\\braces{{1 \\over \\pars{n + 1} - n}\n\\bracks{\\int_{1/\\pars{n + 1}}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t -\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t}}\n\\\\[5mm] = &\\\n\\lim_{n \\to \\infty }\\int_{1/\\pars{n + 1}}^{1/n}{\\cos\\pars{2t} \\over 4t^{2}}\n\\,\\dd t =\n\\lim_{n \\to \\infty}\\bracks{\\pars{{1 \\over n} - {1 \\over n + 1}}{\\cos\\pars{2\\xi_{n}} \\over 4\\xi_{n}^{2}}}\n\\quad\\mbox{where}\\quad {1 \\over n + 1} < \\xi_{n} < {1 \\over n}\n\\end{align}\n<hr>\nMoreover $\\ds{\\pars{~\\mbox{note that}\\ \\lim_{n \\to \\infty}\\,\\,\\xi_{n} = 0~}}$,\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n{1 \\over 4}\\lim_{n \\to \\infty}\n\\bracks{{1 \\over n\\pars{n + 1}}{\\cos\\pars{2\\xi_{n}} \\over \\xi_{n}^{2}}} =\n{1 \\over 4}\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>Note that\n $\\ds{{n \\over n + 1} < n\\pars{n + 1}\\xi_{n}^{2} < {n + 1 \\over n}}$ such that\n $\\ds{\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}} = 1}$.</p>\n</blockquote>\n\n<p>From expression \\eqref{1}:\n$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = \\bbx{\\ds{1 \\over 4}}}$.</p>\n"
},
{
"answer_id": 2105768,
"score": 1,
"text": "<p>For reference, here is a solution based on integration by parts:</p>\n\n<p>\\begin{align*}\n\\int_{1/n}^{1} \\frac{\\cos 2t}{4t^2} \\, dt\n&= \\left[ - \\frac{\\cos 2t}{4t} \\right]_{1/n}^{1} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\\\\n&= \\frac{n}{4}\\cos(2/n) - \\frac{\\cos2}{4} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt.\n\\end{align*}</p>\n\n<p>The last integral term causes no harm, since $|\\sin 2t| \\leq |2t|$ and hence $ \\left| \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\right| \\leq 1$. Thus by the squeezing theorem the claim follows.</p>\n"
},
{
"answer_id": 2105474,
"score": 8,
"text": "<p>It's easy to see that when $0<t\\leq 1$, we have $1-2t^2< \\cos2t<1$, hence\n$$\\frac{1}{4t^2}-\\frac{1}{2}<\\frac{\\cos2t}{4t^2}< \\frac{1}{4t^2}$$\nso\n$$\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\left ( \\frac{1}{4t^2}-\\frac{1}{2} \\right )\\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{1}{4t^2}\\, \\mathrm{d}t$$\n$$\\Rightarrow \\frac{1}{4}-\\frac{3}{4n}+\\frac{1}{2n^2}<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{4}-\\frac{1}{4n}$$\nNow take the limit we will get the answer as wanted.</p>\n"
},
{
"answer_id": 2105533,
"score": 5,
"text": "<p><strong>METHODOLOGY $1$: L'Hospital's. Rule</strong></p>\n\n<p>One simple and efficient approach is to use L'Hospital\"s Rule ( <a href=\"https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof\" rel=\"nofollow noreferrer\">See Note at the end of this Section</a> ). </p>\n\n<p>Note that we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac{\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt}{n}&=\\lim_{n\\to \\infty}\\frac{d}{dn}\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt\\\\\\\\\n&=\\lim_{n\\to \\infty}\\left(-\\frac{\\cos(2/n)}{4/n^2}\\times \\frac{-1}{n^2}\\right)\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<hr>\n\n<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>\n\n<p>Enforcing the substitution $t=x/n$ reveals</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac1n \\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt&=\\lim_{n\\to \\infty}\\int_1^n \\frac{\\cos(2x/n)}{4x^2} \\,dx\\\\\\\\ \n&=\\lim_{n\\to \\infty}\\int_1^\\infty \\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\lim_{n\\to \\infty}\\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\frac1{4x^2}\\,dx\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n"
},
{
"answer_id": 2105603,
"score": 1,
"text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = {1 \\over 4}:\\ {\\large ?}}$.</p>\n</blockquote>\n\n<p>With Stolz-Ces$\\mathrm{\\grave{a}}$ro Theorem:\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n\\lim_{n \\to \\infty }\\braces{{1 \\over \\pars{n + 1} - n}\n\\bracks{\\int_{1/\\pars{n + 1}}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t -\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t}}\n\\\\[5mm] = &\\\n\\lim_{n \\to \\infty }\\int_{1/\\pars{n + 1}}^{1/n}{\\cos\\pars{2t} \\over 4t^{2}}\n\\,\\dd t =\n\\lim_{n \\to \\infty}\\bracks{\\pars{{1 \\over n} - {1 \\over n + 1}}{\\cos\\pars{2\\xi_{n}} \\over 4\\xi_{n}^{2}}}\n\\quad\\mbox{where}\\quad {1 \\over n + 1} < \\xi_{n} < {1 \\over n}\n\\end{align}\n<hr>\nMoreover $\\ds{\\pars{~\\mbox{note that}\\ \\lim_{n \\to \\infty}\\,\\,\\xi_{n} = 0~}}$,\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n{1 \\over 4}\\lim_{n \\to \\infty}\n\\bracks{{1 \\over n\\pars{n + 1}}{\\cos\\pars{2\\xi_{n}} \\over \\xi_{n}^{2}}} =\n{1 \\over 4}\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>Note that\n $\\ds{{n \\over n + 1} < n\\pars{n + 1}\\xi_{n}^{2} < {n + 1 \\over n}}$ such that\n $\\ds{\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}} = 1}$.</p>\n</blockquote>\n\n<p>From expression \\eqref{1}:\n$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = \\bbx{\\ds{1 \\over 4}}}$.</p>\n"
},
{
"answer_id": 2105768,
"score": 1,
"text": "<p>For reference, here is a solution based on integration by parts:</p>\n\n<p>\\begin{align*}\n\\int_{1/n}^{1} \\frac{\\cos 2t}{4t^2} \\, dt\n&= \\left[ - \\frac{\\cos 2t}{4t} \\right]_{1/n}^{1} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\\\\n&= \\frac{n}{4}\\cos(2/n) - \\frac{\\cos2}{4} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt.\n\\end{align*}</p>\n\n<p>The last integral term causes no harm, since $|\\sin 2t| \\leq |2t|$ and hence $ \\left| \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\right| \\leq 1$. Thus by the squeezing theorem the claim follows.</p>\n"
},
{
"answer_id": 2105474,
"score": 8,
"text": "<p>It's easy to see that when $0<t\\leq 1$, we have $1-2t^2< \\cos2t<1$, hence\n$$\\frac{1}{4t^2}-\\frac{1}{2}<\\frac{\\cos2t}{4t^2}< \\frac{1}{4t^2}$$\nso\n$$\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\left ( \\frac{1}{4t^2}-\\frac{1}{2} \\right )\\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{1}{4t^2}\\, \\mathrm{d}t$$\n$$\\Rightarrow \\frac{1}{4}-\\frac{3}{4n}+\\frac{1}{2n^2}<\\frac{1}{n}\\int_{\\frac{1}{n}}^{1}\\frac{\\cos2t}{4t^{2}}\\, \\mathrm{d}t<\\frac{1}{4}-\\frac{1}{4n}$$\nNow take the limit we will get the answer as wanted.</p>\n"
},
{
"answer_id": 2105533,
"score": 5,
"text": "<p><strong>METHODOLOGY $1$: L'Hospital's. Rule</strong></p>\n\n<p>One simple and efficient approach is to use L'Hospital\"s Rule ( <a href=\"https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof\" rel=\"nofollow noreferrer\">See Note at the end of this Section</a> ). </p>\n\n<p>Note that we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac{\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt}{n}&=\\lim_{n\\to \\infty}\\frac{d}{dn}\\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt\\\\\\\\\n&=\\lim_{n\\to \\infty}\\left(-\\frac{\\cos(2/n)}{4/n^2}\\times \\frac{-1}{n^2}\\right)\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<hr>\n\n<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>\n\n<p>Enforcing the substitution $t=x/n$ reveals</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\frac1n \\int_{1/n}^1\\frac{\\cos(2t)}{4t^2}\\,dt&=\\lim_{n\\to \\infty}\\int_1^n \\frac{\\cos(2x/n)}{4x^2} \\,dx\\\\\\\\ \n&=\\lim_{n\\to \\infty}\\int_1^\\infty \\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\lim_{n\\to \\infty}\\xi_{[0,n]}\\frac{\\cos(2x/n)}{4x^2}\\,dx\\\\\\\\\n&=\\int_1^\\infty \\frac1{4x^2}\\,dx\\\\\\\\\n&=\\frac14\n\\end{align}$$</p>\n"
},
{
"answer_id": 2105603,
"score": 1,
"text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = {1 \\over 4}:\\ {\\large ?}}$.</p>\n</blockquote>\n\n<p>With Stolz-Ces$\\mathrm{\\grave{a}}$ro Theorem:\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n\\lim_{n \\to \\infty }\\braces{{1 \\over \\pars{n + 1} - n}\n\\bracks{\\int_{1/\\pars{n + 1}}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t -\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t}}\n\\\\[5mm] = &\\\n\\lim_{n \\to \\infty }\\int_{1/\\pars{n + 1}}^{1/n}{\\cos\\pars{2t} \\over 4t^{2}}\n\\,\\dd t =\n\\lim_{n \\to \\infty}\\bracks{\\pars{{1 \\over n} - {1 \\over n + 1}}{\\cos\\pars{2\\xi_{n}} \\over 4\\xi_{n}^{2}}}\n\\quad\\mbox{where}\\quad {1 \\over n + 1} < \\xi_{n} < {1 \\over n}\n\\end{align}\n<hr>\nMoreover $\\ds{\\pars{~\\mbox{note that}\\ \\lim_{n \\to \\infty}\\,\\,\\xi_{n} = 0~}}$,\n\\begin{align}\n&\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} =\n{1 \\over 4}\\lim_{n \\to \\infty}\n\\bracks{{1 \\over n\\pars{n + 1}}{\\cos\\pars{2\\xi_{n}} \\over \\xi_{n}^{2}}} =\n{1 \\over 4}\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>Note that\n $\\ds{{n \\over n + 1} < n\\pars{n + 1}\\xi_{n}^{2} < {n + 1 \\over n}}$ such that\n $\\ds{\\lim_{n \\to \\infty}\\,{1 \\over n\\pars{n + 1}\\xi_{n}^{2}} = 1}$.</p>\n</blockquote>\n\n<p>From expression \\eqref{1}:\n$\\ds{\\lim_{n \\to \\infty }\\bracks{{1 \\over n}\n\\int_{1/n}^{1}{\\cos\\pars{2t} \\over 4t^{2}}\\,\\dd t} = \\bbx{\\ds{1 \\over 4}}}$.</p>\n"
},
{
"answer_id": 2105768,
"score": 1,
"text": "<p>For reference, here is a solution based on integration by parts:</p>\n\n<p>\\begin{align*}\n\\int_{1/n}^{1} \\frac{\\cos 2t}{4t^2} \\, dt\n&= \\left[ - \\frac{\\cos 2t}{4t} \\right]_{1/n}^{1} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\\\\n&= \\frac{n}{4}\\cos(2/n) - \\frac{\\cos2}{4} - \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt.\n\\end{align*}</p>\n\n<p>The last integral term causes no harm, since $|\\sin 2t| \\leq |2t|$ and hence $ \\left| \\int_{1/n}^{1} \\frac{\\sin 2t}{2t} \\, dt \\right| \\leq 1$. Thus by the squeezing theorem the claim follows.</p>\n"
}
] | Question: Prove $\lim_{n\rightarrow \infty }\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\mathrm dt=\frac{1}{4}$
Answer 1: <p>It's easy to see that when $0<t\leq 1$, we have $1-2t^2< \cos2t<1$, hence
$$\frac{1}{4t^2}-\frac{1}{2}<\frac{\cos2t}{4t^2}< \frac{1}{4t^2}$$
so
$$\frac{1}{n}\int_{\frac{1}{n}}^{1}\left ( \frac{1}{4t^2}-\frac{1}{2} \right )\mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{1}{4t^2}\, \mathrm{d}t$$
$$\Rightarrow \frac{1}{4}-\frac{3}{4n}+\frac{1}{2n^2}<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{4}-\frac{1}{4n}$$
Now take the limit; by the squeeze theorem we obtain the desired result.</p>
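<p>A quick numerical sanity check of these bounds (my own illustrative sketch, not part of the original answer; the sample values of $n$ and the use of SciPy's <code>quad</code> are assumptions of the example):</p>
<pre><code>import math
from scipy.integrate import quad

def avg_integral(n):
    """(1/n) * integral of cos(2t)/(4 t^2) over [1/n, 1], evaluated numerically."""
    val, _ = quad(lambda t: math.cos(2 * t) / (4 * t * t), 1 / n, 1)
    return val / n

for n in (10, 100, 1000, 10000):
    lower = 0.25 - 3 / (4 * n) + 1 / (2 * n * n)  # lower squeeze bound
    upper = 0.25 - 1 / (4 * n)                    # upper squeeze bound
    mid = avg_integral(n)
    assert lower < mid < upper                    # the squeeze really holds
    print(f"n={n:5d}: {lower:.6f} < {mid:.6f} < {upper:.6f}")
</code></pre>
<p>All three columns converge to $1/4$, matching the limit computed above.</p>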
Answer 2: <p><strong>METHODOLOGY $1$: L'Hospital's Rule</strong></p>
<p>One simple and efficient approach is to use L'Hospital's Rule ( <a href="https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof" rel="nofollow noreferrer">See Note at the end of this Section</a> ). </p>
<p>Note that we have</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac{\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt}{n}&=\lim_{n\to \infty}\frac{d}{dn}\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt\\\\
&=\lim_{n\to \infty}\left(-\frac{\cos(2/n)}{4/n^2}\times \frac{-1}{n^2}\right)\\\\
&=\frac14
\end{align}$$</p>
<p>as expected!</p>
<hr>
<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>
<p>Enforcing the substitution $t=x/n$ reveals</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac1n \int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt&=\lim_{n\to \infty}\int_1^n \frac{\cos(2x/n)}{4x^2} \,dx\\\\
&=\lim_{n\to \infty}\int_1^\infty \xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \lim_{n\to \infty}\xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \frac1{4x^2}\,dx\\\\
&=\frac14
\end{align}$$</p>
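<p>As a numerical illustration of the change of variable (again my own sketch; the particular $n$ and SciPy's <code>quad</code> are assumptions of the example), both sides of the substitution identity agree and sit near $1/4$:</p>
<pre><code>import math
from scipy.integrate import quad

n = 500  # illustrative sample size

lhs, _ = quad(lambda t: math.cos(2 * t) / (4 * t * t), 1 / n, 1)
lhs /= n  # (1/n) * integral of cos(2t)/(4 t^2) over [1/n, 1]

rhs, _ = quad(lambda x: math.cos(2 * x / n) / (4 * x * x), 1, n)
# integral of cos(2x/n)/(4 x^2) over [1, n], obtained from t = x/n

print(abs(lhs - rhs))   # identical up to quadrature error
print(abs(rhs - 0.25))  # and close to the Dominated Convergence limit 1/4
</code></pre>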
Answer 3: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = {1 \over 4}:\ {\large ?}}$.</p>
</blockquote>
<p>With Stolz-Ces$\mathrm{\grave{a}}$ro Theorem:
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
\lim_{n \to \infty }\braces{{1 \over \pars{n + 1} - n}
\bracks{\int_{1/\pars{n + 1}}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t -
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t}}
\\[5mm] = &\
\lim_{n \to \infty }\int_{1/\pars{n + 1}}^{1/n}{\cos\pars{2t} \over 4t^{2}}
\,\dd t =
\lim_{n \to \infty}\bracks{\pars{{1 \over n} - {1 \over n + 1}}{\cos\pars{2\xi_{n}} \over 4\xi_{n}^{2}}}
\quad\mbox{where}\quad {1 \over n + 1} < \xi_{n} < {1 \over n}
\end{align}
<hr>
Moreover $\ds{\pars{~\mbox{note that}\ \lim_{n \to \infty}\,\,\xi_{n} = 0~}}$,
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
{1 \over 4}\lim_{n \to \infty}
\bracks{{1 \over n\pars{n + 1}}{\cos\pars{2\xi_{n}} \over \xi_{n}^{2}}} =
{1 \over 4}\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}}\label{1}\tag{1}
\end{align}</p>
<blockquote>
<p>Note that
$\ds{{n \over n + 1} < n\pars{n + 1}\xi_{n}^{2} < {n + 1 \over n}}$ such that
$\ds{\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}} = 1}$.</p>
</blockquote>
<p>From expression \eqref{1}:
$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = \bbx{\ds{1 \over 4}}}$.</p>
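<p>Numerically, the Stolz–Cesàro increments indeed settle at $1/4$ (my own illustrative sketch; SciPy's <code>quad</code> is an assumption of the example):</p>
<pre><code>import math
from scipy.integrate import quad

def increment(n):
    """integral of cos(2t)/(4 t^2) over [1/(n+1), 1/n] -- the Stolz-Cesaro difference."""
    val, _ = quad(lambda t: math.cos(2 * t) / (4 * t * t), 1 / (n + 1), 1 / n)
    return val

for n in (10, 100, 1000):
    print(n, increment(n))  # approaches 1/4, as the mean-value estimate predicts
</code></pre>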
Answer 4: <p>For reference, here is a solution based on integration by parts:</p>
<p>\begin{align*}
\int_{1/n}^{1} \frac{\cos 2t}{4t^2} \, dt
&= \left[ - \frac{\cos 2t}{4t} \right]_{1/n}^{1} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \\
&= \frac{n}{4}\cos(2/n) - \frac{\cos2}{4} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt.
\end{align*}</p>
<p>The last integral term causes no harm, since $|\sin 2t| \leq |2t|$ and hence $ \left| \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \right| \leq 1$. Thus by the squeezing theorem the claim follows.</p>
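<p>The integration-by-parts identity can be checked numerically as well (my own sketch; the sample $n$ and SciPy's <code>quad</code> are assumptions of the example):</p>
<pre><code>import math
from scipy.integrate import quad

n = 200  # illustrative sample size

direct, _ = quad(lambda t: math.cos(2 * t) / (4 * t * t), 1 / n, 1)

sine_term, _ = quad(lambda t: math.sin(2 * t) / (2 * t), 1 / n, 1)
by_parts = n / 4 * math.cos(2 / n) - math.cos(2) / 4 - sine_term

print(abs(direct - by_parts))  # the identity holds up to quadrature error
print(direct / n)              # dividing by n kills the bounded terms: tends to 1/4
</code></pre>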
Answer 5: <p>It's easy to see that when $0<t\leq 1$, we have $1-2t^2< \cos2t<1$, hence
$$\frac{1}{4t^2}-\frac{1}{2}<\frac{\cos2t}{4t^2}< \frac{1}{4t^2}$$
so
$$\frac{1}{n}\int_{\frac{1}{n}}^{1}\left ( \frac{1}{4t^2}-\frac{1}{2} \right )\mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{1}{4t^2}\, \mathrm{d}t$$
$$\Rightarrow \frac{1}{4}-\frac{3}{4n}+\frac{1}{2n^2}<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{4}-\frac{1}{4n}$$
Now take the limit; by the squeeze theorem we obtain the desired result.</p>
Answer 6: <p><strong>METHODOLOGY $1$: L'Hospital's Rule</strong></p>
<p>One simple and efficient approach is to use L'Hospital's Rule ( <a href="https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof" rel="nofollow noreferrer">See Note at the end of this Section</a> ). </p>
<p>Note that we have</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac{\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt}{n}&=\lim_{n\to \infty}\frac{d}{dn}\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt\\\\
&=\lim_{n\to \infty}\left(-\frac{\cos(2/n)}{4/n^2}\times \frac{-1}{n^2}\right)\\\\
&=\frac14
\end{align}$$</p>
<p>as expected!</p>
<hr>
<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>
<p>Enforcing the substitution $t=x/n$ reveals</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac1n \int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt&=\lim_{n\to \infty}\int_1^n \frac{\cos(2x/n)}{4x^2} \,dx\\\\
&=\lim_{n\to \infty}\int_1^\infty \xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \lim_{n\to \infty}\xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \frac1{4x^2}\,dx\\\\
&=\frac14
\end{align}$$</p>
Answer 7: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = {1 \over 4}:\ {\large ?}}$.</p>
</blockquote>
<p>With Stolz-Ces$\mathrm{\grave{a}}$ro Theorem:
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
\lim_{n \to \infty }\braces{{1 \over \pars{n + 1} - n}
\bracks{\int_{1/\pars{n + 1}}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t -
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t}}
\\[5mm] = &\
\lim_{n \to \infty }\int_{1/\pars{n + 1}}^{1/n}{\cos\pars{2t} \over 4t^{2}}
\,\dd t =
\lim_{n \to \infty}\bracks{\pars{{1 \over n} - {1 \over n + 1}}{\cos\pars{2\xi_{n}} \over 4\xi_{n}^{2}}}
\quad\mbox{where}\quad {1 \over n + 1} < \xi_{n} < {1 \over n}
\end{align}
<hr>
Moreover $\ds{\pars{~\mbox{note that}\ \lim_{n \to \infty}\,\,\xi_{n} = 0~}}$,
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
{1 \over 4}\lim_{n \to \infty}
\bracks{{1 \over n\pars{n + 1}}{\cos\pars{2\xi_{n}} \over \xi_{n}^{2}}} =
{1 \over 4}\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}}\label{1}\tag{1}
\end{align}</p>
<blockquote>
<p>Note that
$\ds{{n \over n + 1} < n\pars{n + 1}\xi_{n}^{2} < {n + 1 \over n}}$ such that
$\ds{\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}} = 1}$.</p>
</blockquote>
<p>From expression \eqref{1}:
$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = \bbx{\ds{1 \over 4}}}$.</p>
Answer 8: <p>For reference, here is a solution based on integration by parts:</p>
<p>\begin{align*}
\int_{1/n}^{1} \frac{\cos 2t}{4t^2} \, dt
&= \left[ - \frac{\cos 2t}{4t} \right]_{1/n}^{1} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \\
&= \frac{n}{4}\cos(2/n) - \frac{\cos2}{4} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt.
\end{align*}</p>
<p>The last integral term causes no harm, since $|\sin 2t| \leq |2t|$ and hence $ \left| \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \right| \leq 1$. Thus by the squeezing theorem the claim follows.</p>
Answer 9: <p>It's easy to see that when $0<t\leq 1$, we have $1-2t^2< \cos2t<1$, hence
$$\frac{1}{4t^2}-\frac{1}{2}<\frac{\cos2t}{4t^2}< \frac{1}{4t^2}$$
so
$$\frac{1}{n}\int_{\frac{1}{n}}^{1}\left ( \frac{1}{4t^2}-\frac{1}{2} \right )\mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{1}{4t^2}\, \mathrm{d}t$$
$$\Rightarrow \frac{1}{4}-\frac{3}{4n}+\frac{1}{2n^2}<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{4}-\frac{1}{4n}$$
Now take the limit; by the squeeze theorem we obtain the desired result.</p>
Answer 10: <p><strong>METHODOLOGY $1$: L'Hospital's Rule</strong></p>
<p>One simple and efficient approach is to use L'Hospital's Rule ( <a href="https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof" rel="nofollow noreferrer">See Note at the end of this Section</a> ). </p>
<p>Note that we have</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac{\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt}{n}&=\lim_{n\to \infty}\frac{d}{dn}\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt\\\\
&=\lim_{n\to \infty}\left(-\frac{\cos(2/n)}{4/n^2}\times \frac{-1}{n^2}\right)\\\\
&=\frac14
\end{align}$$</p>
<p>as expected!</p>
<hr>
<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>
<p>Enforcing the substitution $t=x/n$ reveals</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac1n \int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt&=\lim_{n\to \infty}\int_1^n \frac{\cos(2x/n)}{4x^2} \,dx\\\\
&=\lim_{n\to \infty}\int_1^\infty \xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \lim_{n\to \infty}\xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \frac1{4x^2}\,dx\\\\
&=\frac14
\end{align}$$</p>
Answer 11: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = {1 \over 4}:\ {\large ?}}$.</p>
</blockquote>
<p>With Stolz-Ces$\mathrm{\grave{a}}$ro Theorem:
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
\lim_{n \to \infty }\braces{{1 \over \pars{n + 1} - n}
\bracks{\int_{1/\pars{n + 1}}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t -
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t}}
\\[5mm] = &\
\lim_{n \to \infty }\int_{1/\pars{n + 1}}^{1/n}{\cos\pars{2t} \over 4t^{2}}
\,\dd t =
\lim_{n \to \infty}\bracks{\pars{{1 \over n} - {1 \over n + 1}}{\cos\pars{2\xi_{n}} \over 4\xi_{n}^{2}}}
\quad\mbox{where}\quad {1 \over n + 1} < \xi_{n} < {1 \over n}
\end{align}
<hr>
Moreover $\ds{\pars{~\mbox{note that}\ \lim_{n \to \infty}\,\,\xi_{n} = 0~}}$,
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
{1 \over 4}\lim_{n \to \infty}
\bracks{{1 \over n\pars{n + 1}}{\cos\pars{2\xi_{n}} \over \xi_{n}^{2}}} =
{1 \over 4}\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}}\label{1}\tag{1}
\end{align}</p>
<blockquote>
<p>Note that
$\ds{{n \over n + 1} < n\pars{n + 1}\xi_{n}^{2} < {n + 1 \over n}}$ such that
$\ds{\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}} = 1}$.</p>
</blockquote>
<p>From expression \eqref{1}:
$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = \bbx{\ds{1 \over 4}}}$.</p>
Answer 12: <p>For reference, here is a solution based on integration by parts:</p>
<p>\begin{align*}
\int_{1/n}^{1} \frac{\cos 2t}{4t^2} \, dt
&= \left[ - \frac{\cos 2t}{4t} \right]_{1/n}^{1} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \\
&= \frac{n}{4}\cos(2/n) - \frac{\cos2}{4} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt.
\end{align*}</p>
<p>The last integral term causes no harm, since $|\sin 2t| \leq |2t|$ and hence $ \left| \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \right| \leq 1$. Thus by the squeezing theorem the claim follows.</p>
Answer 13: <p>It's easy to see that when $0<t\leq 1$, we have $1-2t^2< \cos2t<1$, hence
$$\frac{1}{4t^2}-\frac{1}{2}<\frac{\cos2t}{4t^2}< \frac{1}{4t^2}$$
so
$$\frac{1}{n}\int_{\frac{1}{n}}^{1}\left ( \frac{1}{4t^2}-\frac{1}{2} \right )\mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{1}{4t^2}\, \mathrm{d}t$$
$$\Rightarrow \frac{1}{4}-\frac{3}{4n}+\frac{1}{2n^2}<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{4}-\frac{1}{4n}$$
Now take the limit; by the squeeze theorem we obtain the desired result.</p>
Answer 14: <p><strong>METHODOLOGY $1$: L'Hospital's Rule</strong></p>
<p>One simple and efficient approach is to use L'Hospital's Rule ( <a href="https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof" rel="nofollow noreferrer">See Note at the end of this Section</a> ). </p>
<p>Note that we have</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac{\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt}{n}&=\lim_{n\to \infty}\frac{d}{dn}\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt\\\\
&=\lim_{n\to \infty}\left(-\frac{\cos(2/n)}{4/n^2}\times \frac{-1}{n^2}\right)\\\\
&=\frac14
\end{align}$$</p>
<p>as expected!</p>
<hr>
<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>
<p>Enforcing the substitution $t=x/n$ reveals</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac1n \int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt&=\lim_{n\to \infty}\int_1^n \frac{\cos(2x/n)}{4x^2} \,dx\\\\
&=\lim_{n\to \infty}\int_1^\infty \xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \lim_{n\to \infty}\xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \frac1{4x^2}\,dx\\\\
&=\frac14
\end{align}$$</p>
Answer 15: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = {1 \over 4}:\ {\large ?}}$.</p>
</blockquote>
<p>With Stolz-Ces$\mathrm{\grave{a}}$ro Theorem:
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
\lim_{n \to \infty }\braces{{1 \over \pars{n + 1} - n}
\bracks{\int_{1/\pars{n + 1}}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t -
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t}}
\\[5mm] = &\
\lim_{n \to \infty }\int_{1/\pars{n + 1}}^{1/n}{\cos\pars{2t} \over 4t^{2}}
\,\dd t =
\lim_{n \to \infty}\bracks{\pars{{1 \over n} - {1 \over n + 1}}{\cos\pars{2\xi_{n}} \over 4\xi_{n}^{2}}}
\quad\mbox{where}\quad {1 \over n + 1} < \xi_{n} < {1 \over n}
\end{align}
<hr>
Moreover $\ds{\pars{~\mbox{note that}\ \lim_{n \to \infty}\,\,\xi_{n} = 0~}}$,
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
{1 \over 4}\lim_{n \to \infty}
\bracks{{1 \over n\pars{n + 1}}{\cos\pars{2\xi_{n}} \over \xi_{n}^{2}}} =
{1 \over 4}\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}}\label{1}\tag{1}
\end{align}</p>
<blockquote>
<p>Note that
$\ds{{n \over n + 1} < n\pars{n + 1}\xi_{n}^{2} < {n + 1 \over n}}$ such that
$\ds{\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}} = 1}$.</p>
</blockquote>
<p>From expression \eqref{1}:
$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = \bbx{\ds{1 \over 4}}}$.</p>
Answer 16: <p>For reference, here is a solution based on integration by parts:</p>
<p>\begin{align*}
\int_{1/n}^{1} \frac{\cos 2t}{4t^2} \, dt
&= \left[ - \frac{\cos 2t}{4t} \right]_{1/n}^{1} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \\
&= \frac{n}{4}\cos(2/n) - \frac{\cos2}{4} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt.
\end{align*}</p>
<p>The last integral term causes no harm, since $|\sin 2t| \leq |2t|$ and hence $ \left| \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \right| \leq 1$. Thus by the squeezing theorem the claim follows.</p>
Answer 17: <p>It's easy to see that when $0<t\leq 1$, we have $1-2t^2< \cos2t<1$, hence
$$\frac{1}{4t^2}-\frac{1}{2}<\frac{\cos2t}{4t^2}< \frac{1}{4t^2}$$
so
$$\frac{1}{n}\int_{\frac{1}{n}}^{1}\left ( \frac{1}{4t^2}-\frac{1}{2} \right )\mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{1}{4t^2}\, \mathrm{d}t$$
$$\Rightarrow \frac{1}{4}-\frac{3}{4n}+\frac{1}{2n^2}<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{4}-\frac{1}{4n}$$
Now take the limit; by the squeeze theorem we obtain the desired result.</p>
Answer 18: <p><strong>METHODOLOGY $1$: L'Hospital's Rule</strong></p>
<p>One simple and efficient approach is to use L'Hospital's Rule ( <a href="https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof" rel="nofollow noreferrer">See Note at the end of this Section</a> ). </p>
<p>Note that we have</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac{\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt}{n}&=\lim_{n\to \infty}\frac{d}{dn}\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt\\\\
&=\lim_{n\to \infty}\left(-\frac{\cos(2/n)}{4/n^2}\times \frac{-1}{n^2}\right)\\\\
&=\frac14
\end{align}$$</p>
<p>as expected!</p>
<hr>
<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>
<p>Enforcing the substitution $t=x/n$ reveals</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac1n \int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt&=\lim_{n\to \infty}\int_1^n \frac{\cos(2x/n)}{4x^2} \,dx\\\\
&=\lim_{n\to \infty}\int_1^\infty \xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \lim_{n\to \infty}\xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \frac1{4x^2}\,dx\\\\
&=\frac14
\end{align}$$</p>
Answer 19: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = {1 \over 4}:\ {\large ?}}$.</p>
</blockquote>
<p>With Stolz-Ces$\mathrm{\grave{a}}$ro Theorem:
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
\lim_{n \to \infty }\braces{{1 \over \pars{n + 1} - n}
\bracks{\int_{1/\pars{n + 1}}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t -
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t}}
\\[5mm] = &\
\lim_{n \to \infty }\int_{1/\pars{n + 1}}^{1/n}{\cos\pars{2t} \over 4t^{2}}
\,\dd t =
\lim_{n \to \infty}\bracks{\pars{{1 \over n} - {1 \over n + 1}}{\cos\pars{2\xi_{n}} \over 4\xi_{n}^{2}}}
\quad\mbox{where}\quad {1 \over n + 1} < \xi_{n} < {1 \over n}
\end{align}
<hr>
Moreover $\ds{\pars{~\mbox{note that}\ \lim_{n \to \infty}\,\,\xi_{n} = 0~}}$,
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
{1 \over 4}\lim_{n \to \infty}
\bracks{{1 \over n\pars{n + 1}}{\cos\pars{2\xi_{n}} \over \xi_{n}^{2}}} =
{1 \over 4}\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}}\label{1}\tag{1}
\end{align}</p>
<blockquote>
<p>Note that
$\ds{{n \over n + 1} < n\pars{n + 1}\xi_{n}^{2} < {n + 1 \over n}}$ such that
$\ds{\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}} = 1}$.</p>
</blockquote>
<p>From expression \eqref{1}:
$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = \bbx{\ds{1 \over 4}}}$.</p>
Answer 20: <p>For reference, here is a solution based on integration by parts:</p>
<p>\begin{align*}
\int_{1/n}^{1} \frac{\cos 2t}{4t^2} \, dt
&= \left[ - \frac{\cos 2t}{4t} \right]_{1/n}^{1} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \\
&= \frac{n}{4}\cos(2/n) - \frac{\cos2}{4} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt.
\end{align*}</p>
<p>The last integral term causes no harm, since $|\sin 2t| \leq |2t|$ and hence $ \left| \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \right| \leq 1$. Thus by the squeezing theorem the claim follows.</p>
Answer 21: <p>It's easy to see that when $0<t\leq 1$, we have $1-2t^2< \cos2t<1$, hence
$$\frac{1}{4t^2}-\frac{1}{2}<\frac{\cos2t}{4t^2}< \frac{1}{4t^2}$$
so
$$\frac{1}{n}\int_{\frac{1}{n}}^{1}\left ( \frac{1}{4t^2}-\frac{1}{2} \right )\mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{1}{4t^2}\, \mathrm{d}t$$
$$\Rightarrow \frac{1}{4}-\frac{3}{4n}+\frac{1}{2n^2}<\frac{1}{n}\int_{\frac{1}{n}}^{1}\frac{\cos2t}{4t^{2}}\, \mathrm{d}t<\frac{1}{4}-\frac{1}{4n}$$
Now take the limit; by the squeeze theorem we obtain the desired result.</p>
Answer 22: <p><strong>METHODOLOGY $1$: L'Hospital's Rule</strong></p>
<p>One simple and efficient approach is to use L'Hospital's Rule ( <a href="https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule#General_proof" rel="nofollow noreferrer">See Note at the end of this Section</a> ). </p>
<p>Note that we have</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac{\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt}{n}&=\lim_{n\to \infty}\frac{d}{dn}\int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt\\\\
&=\lim_{n\to \infty}\left(-\frac{\cos(2/n)}{4/n^2}\times \frac{-1}{n^2}\right)\\\\
&=\frac14
\end{align}$$</p>
<p>as expected!</p>
<hr>
<p><strong>METHODOLOGY $2$: Change of Variable and the Dominated Convergence Theorem</strong></p>
<p>Enforcing the substitution $t=x/n$ reveals</p>
<p>$$\begin{align}
\lim_{n\to \infty}\frac1n \int_{1/n}^1\frac{\cos(2t)}{4t^2}\,dt&=\lim_{n\to \infty}\int_1^n \frac{\cos(2x/n)}{4x^2} \,dx\\\\
&=\lim_{n\to \infty}\int_1^\infty \xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \lim_{n\to \infty}\xi_{[0,n]}\frac{\cos(2x/n)}{4x^2}\,dx\\\\
&=\int_1^\infty \frac1{4x^2}\,dx\\\\
&=\frac14
\end{align}$$</p>
Answer 23: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = {1 \over 4}:\ {\large ?}}$.</p>
</blockquote>
<p>With Stolz-Ces$\mathrm{\grave{a}}$ro Theorem:
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
\lim_{n \to \infty }\braces{{1 \over \pars{n + 1} - n}
\bracks{\int_{1/\pars{n + 1}}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t -
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t}}
\\[5mm] = &\
\lim_{n \to \infty }\int_{1/\pars{n + 1}}^{1/n}{\cos\pars{2t} \over 4t^{2}}
\,\dd t =
\lim_{n \to \infty}\bracks{\pars{{1 \over n} - {1 \over n + 1}}{\cos\pars{2\xi_{n}} \over 4\xi_{n}^{2}}}
\quad\mbox{where}\quad {1 \over n + 1} < \xi_{n} < {1 \over n}
\end{align}
<hr>
Moreover $\ds{\pars{~\mbox{note that}\ \lim_{n \to \infty}\,\,\xi_{n} = 0~}}$,
\begin{align}
&\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} =
{1 \over 4}\lim_{n \to \infty}
\bracks{{1 \over n\pars{n + 1}}{\cos\pars{2\xi_{n}} \over \xi_{n}^{2}}} =
{1 \over 4}\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}}\label{1}\tag{1}
\end{align}</p>
<blockquote>
<p>Note that
$\ds{{n \over n + 1} < n\pars{n + 1}\xi_{n}^{2} < {n + 1 \over n}}$ such that
$\ds{\lim_{n \to \infty}\,{1 \over n\pars{n + 1}\xi_{n}^{2}} = 1}$.</p>
</blockquote>
<p>From expression \eqref{1}:
$\ds{\lim_{n \to \infty }\bracks{{1 \over n}
\int_{1/n}^{1}{\cos\pars{2t} \over 4t^{2}}\,\dd t} = \bbx{\ds{1 \over 4}}}$.</p>
Answer 24: <p>For reference, here is a solution based on integration by parts:</p>
<p>\begin{align*}
\int_{1/n}^{1} \frac{\cos 2t}{4t^2} \, dt
&= \left[ - \frac{\cos 2t}{4t} \right]_{1/n}^{1} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \\
&= \frac{n}{4}\cos(2/n) - \frac{\cos2}{4} - \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt.
\end{align*}</p>
<p>The last integral term causes no harm, since $|\sin 2t| \leq |2t|$ and hence $ \left| \int_{1/n}^{1} \frac{\sin 2t}{2t} \, dt \right| \leq 1$. Thus by the squeezing theorem the claim follows.</p>
|
Proof about continuity of a function involving the Banach fixed point theorem | 2,105,528 | 5 | https://math.stackexchange.com/questions/2105528/proof-about-continuity-of-a-function-involving-the-banach-fixed-point-theorem | <blockquote>
<p>Let $X$ and $\Lambda$ be metric spaces, with $X$ complete, and $f\in C(X\times\Lambda,X)$. Suppose that there exists some $\alpha\in[0,1)$ and, for each $\lambda\in\Lambda$, some $q(\lambda)\in[0,\alpha]$ such that
$$d(f(x,\lambda),f(y,\lambda))\le q(\lambda) d(x,y),\quad\forall x,y\in X$$
By the Banach fixed point theorem, for each $\lambda\in\Lambda$, $f(\cdot,\lambda)$ has a unique fixed point $x(\lambda)$. Prove that $[\lambda\mapsto x(\lambda)]\in C(\Lambda,X)$.</p>
</blockquote>
<p>I'm totally stuck with this exercise; I don't have a clue what to do. I tried to show the continuity of $h$, defined as</p>
<p>$$h:\Lambda\to X,\quad \lambda\mapsto x(\lambda)$$</p>
<p>through the $\epsilon$-$\delta$ definition of continuity of $f$ and the information in the problem, but I can't do it. Geometrically its veracity is easy to see, because the function $h$ is just the intersection of $f$ with the plane defined by the set $\{\langle x,y\rangle\in X\times X:x=y\}\times\lambda$.</p>
<p>My work: from the continuity of $f$ we have that, for any fixed point $x:=f(x,\lambda_x)$ and any $\epsilon>0$, there exists a $\delta>0$ such that</p>
<p>$$d(\langle y,\lambda\rangle,\langle x,\lambda_x\rangle)<\delta\implies d(f(y,\lambda),x)<\epsilon$$</p>
<p>If we set $\langle y,\lambda\rangle=\langle x_0,\lambda_x\rangle$ then for the sequence defined as</p>
<p>$$x_n:=f(x_{n-1},\lambda_x)$$</p>
<p>that converges to the fixed point $x$, we can rewrite the above as</p>
<p>$$d(\langle x_0,\lambda_x\rangle,\langle x,\lambda_x\rangle)<\delta\implies d(x_1,x)<\epsilon\tag{1}$$</p>
<p>and from the contraction property of $g_\lambda:=f(\cdot,\lambda)$ for fixed $\lambda\in\Lambda$ we know that</p>
<p>$$d(f(x_0,\lambda_x),x)=d(x_1,x)\le q(\lambda_x) d(x_0,x)\tag{2}$$</p>
<p>for a fixed point $x$ (with any $0\le q(\lambda)<1$, hence the contraction). Moreover, we can suppose that the metric on $X\times\Lambda$ is the standard product metric; then:</p>
<p>$$d(\langle a,b\rangle,\langle c,d\rangle)=\max\{d(a,c),d(b,d)\}\tag{3}$$</p>
<p>Then applying $(3)$ in $(1)$ we get</p>
<p>$$d(x_0,x)<\delta\implies d(x_1,x)<\epsilon$$</p>
<p>But as I said, I'm stuck; I don't know how to show the desired continuity of $h$. Probably the last two (or three) identities are useless; I just wrote them down to see whether I could get something from them. Any help will be appreciated, thank you.</p>
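<p>For intuition, here is a small numerical sketch of the statement being proved (my own addition, not part of the exercise; the toy contraction $f(x,\lambda)=\frac{1}{2}\cos x+\lambda$ on $\mathbb{R}$, with $q(\lambda)\equiv\frac{1}{2}$, is an assumed example): the fixed point produced by Banach iteration visibly moves continuously with the parameter.</p>
<pre><code>import math

def fixed_point(lam, x0=0.0, iters=80):
    """Banach iteration for f(x, lam) = 0.5*cos(x) + lam, a contraction with q = 1/2."""
    x = x0
    for _ in range(iters):
        x = 0.5 * math.cos(x) + lam
    return x

# Nearby parameters yield nearby fixed points, as the claimed continuity predicts.
for lam in (1.0, 1.001, 1.01, 1.1):
    print(lam, fixed_point(lam))
</code></pre>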
| [
{
"answer_id": 2105720,
"score": 1,
"text": "<p>If $h$ were not continuous at some $\\lambda \\in \\Lambda$, then it would be true</p>\n\n<p>\\begin{align}\\tag{1}\\label{eq1}\\exists \\epsilon_\\lambda>0,\\,\\forall \\delta >0,\\, \\exists \\lambda_{\\delta}\\in\\Lambda \\text{ with } d_{\\Lambda}(\\lambda,\\lambda_{\\delta})<\\delta,\\, d_X(x_{\\lambda},x_{\\lambda_{\\delta}})\\geq \\epsilon_\\lambda\\end{align}</p>\n\n<p>In other words, we can find a sequence $\\lambda_n \\to \\lambda$ such that $x_{\\lambda_n}$ is always at least $\\epsilon_\\lambda$-away from $x_{\\lambda}$.</p>\n\n<p>Now, since $f$ is continuous at $(x_\\lambda,\\lambda)$, it is true that:</p>\n\n<p>$$\\forall \\epsilon >0,\\, \\exists \\delta > 0,\\, \\forall y\\in X,\\eta \\in \\Lambda \\text{ with } d_{\\Lambda}(\\lambda,\\eta) <\\delta\\text{ and }d_X(x_\\lambda,y)<\\delta,\\,\\\\d_X(f_\\lambda(x_\\lambda),f_\\eta(y))=d_X(x_\\lambda,f_\\eta(y))< \\epsilon$$</p>\n\n<p>Whatever $\\delta$ is in the statement above, we may take $y=x_\\lambda$ and $\\eta=\\lambda_n$ for some large enough $n$.\nMore precisely, there is a subsequence $\\lambda_{n_k}$ of $\\lambda_n$ with $d_X(x_\\lambda,f_{\\lambda_{n_k}}(x_{\\lambda}))\\to 0$ as $k \\to \\infty$.</p>\n\n<p>Now, $$d_X(x_\\lambda,x_{\\lambda_{n_k}})\\leq d_X(x_\\lambda,f_{\\lambda_{n_k}}(x_{\\lambda}))+d_X(f_{\\lambda_{n_k}}(x_{\\lambda}),\\underbrace{f_{\\lambda_{n_k}}(x_{\\lambda_{n_k}})}_{x_{\\lambda_{n_k}}})\\\\\\leq d_X(x_\\lambda,f_{\\lambda_{n_k}}(x_{\\lambda}))+\\alpha\\cdot d_X(x_\\lambda,x_{\\lambda_{n_k}})$$</p>\n\n<p>So that</p>\n\n<p>$$(1-\\alpha)\\cdot d_X(x_\\lambda,x_{\\lambda_{n_k}})\\leq d_X(x_\\lambda,f_{\\lambda_{n_k}}(x_{\\lambda}))$$</p>\n\n<p>If $\\eqref{eq1}$ did hold, then the LHS above would be bounded from below by $(1-\\alpha)\\cdot\\epsilon_\\lambda > 0$, but the RHS goes to $0$ as $k \\to \\infty$, a contradiction. It follows that $\\eqref{eq1}$ may not hold, and $h$ is continuous.</p>\n"
},
{
"answer_id": 2105752,
"score": 0,
"text": "<p>I am going to prove that $\\lambda\\mapsto x(\\lambda)$ is continuous at some $\\lambda\\in \\Lambda$. For this, let $\\epsilon>0$ be arbitrary and let $\\delta>0$ be such that $d(f(x(\\lambda),\\lambda),f(x(\\lambda),\\eta))\\leq \\epsilon$ holds for any $\\eta\\in\\Lambda$ with $d(\\eta,\\lambda)\\leq \\delta$. Fix such a $\\eta\\in\\Lambda$.</p>\n\n<p>I will use the following fact (see * below for a proof)</p>\n\n<blockquote>\n <p>By the proof of Banach's fixed-point theorem for the mapping\n $f(\\cdot,\\eta)$ we have $$ d(x(\\eta),x_0)\\leq\n \\frac{1}{1-q(\\eta)}d(f(x_0,\\eta),x_0) $$ for any $x_0\\in X$ </p>\n</blockquote>\n\n<p>Applying this fact with $x_0:=x(\\lambda)$ we get \n$$\nd(x(\\eta),x(\\lambda))\\leq \\frac{1}{1-q(\\eta)}d(f(x(\\lambda),\\eta),x(\\lambda))\\\\\n= \\frac{1}{1-q(\\eta)}d(f(x(\\lambda),\\eta),f(x(\\lambda),\\lambda))\\\\\n\\leq \\frac{1}{1-\\alpha}\\epsilon.\n$$\n This proves the claim.</p>\n\n<p><strong>Summary in words:</strong> For any $\\lambda\\in\\Lambda$ the fixed point $x(\\lambda)$ can be found by just applying $f_{\\lambda}(\\cdot):=f(\\cdot,\\lambda)$ repeatedly, with an arbitrary initial point. The uniform bound on $q(\\lambda)$ shows that the limit of this iteration is at most a constant factor further away from the initial point than the first iteration. If we now start the iteration of $\\eta\\in\\Lambda$ in the fixed point of $\\lambda\\in\\Lambda$, then the first iteration only induces a small step, by continuity of $f$ in the second argument. It follows that the fixed point of $\\eta$ is close to that of $\\lambda$</p>\n\n<p><strong>Note:</strong> I only used continuity of $f$ in the second argument. Continuity in the first argument is needed for the existence of fixed points only. </p>\n\n<p><strong>*</strong> The proof of Banach's fixed point theorem proceeds by repeatedly applying the contraction mapping to an arbitrary $x_0$. The term $f(x_0,\\eta)$ in the right-hand side above is the first iteration, $x_1$, and the term $\\frac{1}{1-q(\\eta)}$ comes from summing up the geometric series in $d(x(\\eta),x_0)\\leq \\sum_{n=0}^{\\infty}d(x_{n+1},x_{n})\\leq \\sum_{n=0}^{\\infty}q^nd(x_{1},x_0)$</p>\n"
}
] | Question: Proof about continuity of a function involving the Banach fixed point theorem
Answer 1: <p>If $h$ were not continuous at some $\lambda \in \Lambda$, then it would be true</p>
<p>\begin{align}\tag{1}\label{eq1}\exists \epsilon_\lambda>0,\,\forall \delta >0,\, \exists \lambda_{\delta}\in\Lambda \text{ with } d_{\Lambda}(\lambda,\lambda_{\delta})<\delta,\, d_X(x_{\lambda},x_{\lambda_{\delta}})\geq \epsilon_\lambda\end{align}</p>
<p>In other words, we can find a sequence $\lambda_n \to \lambda$ such that $x_{\lambda_n}$ is always at least $\epsilon_\lambda$-away from $x_{\lambda}$.</p>
<p>Now, since $f$ is continuous at $(x_\lambda,\lambda)$, it is true that:</p>
<p>$$\forall \epsilon >0,\, \exists \delta > 0,\, \forall y\in X,\eta \in \Lambda \text{ with } d_{\Lambda}(\lambda,\eta) <\delta\text{ and }d_X(x_\lambda,y)<\delta,\,\\d_X(f_\lambda(x_\lambda),f_\eta(y))=d_X(x_\lambda,f_\eta(y))< \epsilon$$</p>
<p>Whatever $\delta$ is in the statement above, we may take $y=x_\lambda$ and $\eta=\lambda_n$ for some large enough $n$.
More precisely, there is a subsequence $\lambda_{n_k}$ of $\lambda_n$ with $d_X(x_\lambda,f_{\lambda_{n_k}}(x_{\lambda}))\to 0$ as $k \to \infty$.</p>
<p>Now, $$d_X(x_\lambda,x_{\lambda_{n_k}})\leq d_X(x_\lambda,f_{\lambda_{n_k}}(x_{\lambda}))+d_X(f_{\lambda_{n_k}}(x_{\lambda}),\underbrace{f_{\lambda_{n_k}}(x_{\lambda_{n_k}})}_{x_{\lambda_{n_k}}})\\\leq d_X(x_\lambda,f_{\lambda_{n_k}}(x_{\lambda}))+\alpha\cdot d_X(x_\lambda,x_{\lambda_{n_k}})$$</p>
<p>So that</p>
<p>$$(1-\alpha)\cdot d_X(x_\lambda,x_{\lambda_{n_k}})\leq d_X(x_\lambda,f_{\lambda_{n_k}}(x_{\lambda}))$$</p>
<p>If $\eqref{eq1}$ did hold, then the LHS above would be bounded from below by $(1-\alpha)\cdot\epsilon_\lambda > 0$, but the RHS goes to $0$ as $k \to \infty$, a contradiction. It follows that $\eqref{eq1}$ may not hold, and $h$ is continuous.</p>
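<p>A minimal numeric sketch of the statement being proved, assuming the concrete contraction $f(x,\lambda)=\frac{1}{2}\cos x+\lambda$ on $X=\mathbb{R}$, $\Lambda=\mathbb{R}$ (so $\alpha=\frac{1}{2}$ uniformly); the function and the parameter values are illustrative assumptions:</p>
<pre><code>import math

# Assumed contraction: |d/dx (0.5*cos x)| <= 0.5, so q(lam) = 0.5 for every lam.
def f(x, lam):
    return 0.5 * math.cos(x) + lam

def h(lam, x0=0.0, tol=1e-12):
    # Banach iteration x_{n+1} = f(x_n, lam); converges from any starting point.
    x = x0
    while abs(f(x, lam) - x) > tol:
        x = f(x, lam)
    return x

# Nearby parameters give nearby fixed points, as the argument above shows in general.
for lam in (0.30, 0.31, 0.32):
    print(lam, h(lam))
</code></pre>
<p>Here $f$ is $1$-Lipschitz in $\lambda$, so the printed fixed points differ by at most $\frac{1}{1-\alpha}=2$ times the spacing of the parameters.</p>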
Answer 2: <p>I am going to prove that $\lambda\mapsto x(\lambda)$ is continuous at some $\lambda\in \Lambda$. For this, let $\epsilon>0$ be arbitrary and let $\delta>0$ be such that $d(f(x(\lambda),\lambda),f(x(\lambda),\eta))\leq \epsilon$ holds for any $\eta\in\Lambda$ with $d(\eta,\lambda)\leq \delta$. Fix such an $\eta\in\Lambda$.</p>
<p>I will use the following fact (see * below for a proof)</p>
<blockquote>
<p>By the proof of Banach's fixed-point theorem for the mapping
$f(\cdot,\eta)$ we have $$ d(x(\eta),x_0)\leq
\frac{1}{1-q(\eta)}d(f(x_0,\eta),x_0) $$ for any $x_0\in X$ </p>
</blockquote>
<p>Applying this fact with $x_0:=x(\lambda)$ we get
$$
d(x(\eta),x(\lambda))\leq \frac{1}{1-q(\eta)}d(f(x(\lambda),\eta),x(\lambda))\\
= \frac{1}{1-q(\eta)}d(f(x(\lambda),\eta),f(x(\lambda),\lambda))\\
\leq \frac{1}{1-\alpha}\epsilon.
$$
This proves the claim.</p>
<p><strong>Summary in words:</strong> For any $\lambda\in\Lambda$ the fixed point $x(\lambda)$ can be found by just applying $f_{\lambda}(\cdot):=f(\cdot,\lambda)$ repeatedly, with an arbitrary initial point. The uniform bound on $q(\lambda)$ shows that the limit of this iteration is at most a constant factor further away from the initial point than the first iteration. If we now start the iteration of $\eta\in\Lambda$ in the fixed point of $\lambda\in\Lambda$, then the first iteration only induces a small step, by continuity of $f$ in the second argument. It follows that the fixed point of $\eta$ is close to that of $\lambda$.</p>
<p><strong>Note:</strong> I only used continuity of $f$ in the second argument. Continuity in the first argument is needed for the existence of fixed points only. </p>
<p><strong>*</strong> The proof of Banach's fixed point theorem proceeds by repeatedly applying the contraction mapping to an arbitrary $x_0$. The term $f(x_0,\eta)$ in the right-hand side above is the first iteration, $x_1$, and the term $\frac{1}{1-q(\eta)}$ comes from summing up the geometric series in $d(x(\eta),x_0)\leq \sum_{n=0}^{\infty}d(x_{n+1},x_{n})\leq \sum_{n=0}^{\infty}q^nd(x_{1},x_0)$</p>
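<p>The a priori bound quoted above can be sanity-checked numerically; a sketch under the same assumed contraction $f(x,\eta)=\frac{1}{2}\cos x+\eta$, for which $q(\eta)=\frac{1}{2}$:</p>
<pre><code>import math

def f(x, eta):
    # Assumed contraction with q(eta) = 0.5 for every eta.
    return 0.5 * math.cos(x) + eta

def fixed_point(eta, x0, tol=1e-12):
    # Banach iteration starting from x0.
    x = x0
    while abs(f(x, eta) - x) > tol:
        x = f(x, eta)
    return x

q, eta = 0.5, 0.7
for x0 in (-2.0, 0.0, 3.0):
    lhs = abs(fixed_point(eta, x0) - x0)    # d(x(eta), x0)
    rhs = abs(f(x0, eta) - x0) / (1.0 - q)  # d(f(x0, eta), x0) / (1 - q(eta))
    print(x0, lhs <= rhs, lhs, rhs)
</code></pre>
<p>Every line should print <code>True</code>: the fixed point lies at most $\frac{1}{1-q(\eta)}$ times the first iteration step away from the starting point, which is exactly the estimate driving the continuity argument.</p>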
|
What is $\frac{d}{dx}\int_0^{x^2}f(t) \,dt$? | 2,105,638 | 5 | https://math.stackexchange.com/questions/2105638/what-is-fracddx-int-0x2ft-dt | <p>What is $\frac{d}{dx}\int_0^{x^2}f(t) dt$</p>
<p>My understanding is that if $F(t)$ is an antiderivative of $f(t)$, then it should be $F(x^2)-F(0)$, but the given answer is $f(x^2)(2x)$.</p>
<p>The entire problem is as follow:</p>
<p>Find $f(4)$ if $\int_0^{x^2}f(t)\, dt=x\cos (\pi x)$</p>
<p>and the solution is </p>
<p>$\frac{d}{dx}\int_0^{x^2}f(t) \,dt=\cos \pi x -\pi x \sin \pi x\Rightarrow f(x^2)\cdot 2x=\cos \pi x -\pi x \sin \pi x \Rightarrow f(x^2)=\frac{\cos \pi x -\pi x \sin \pi x}{2x}$</p>
<p>Thus $x=2\Rightarrow f(4)=1/4$</p>
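<p>A quick symbolic check of this value; a minimal sympy sketch of the computation:</p>
<pre><code>import sympy as sp

x = sp.symbols('x', positive=True)
# Differentiate the right-hand side x*cos(pi*x); this must equal 2*x*f(x**2).
g = sp.diff(x * sp.cos(sp.pi * x), x)  # cos(pi*x) - pi*x*sin(pi*x)
f_of_x2 = sp.simplify(g / (2 * x))     # this is f(x**2)
print(f_of_x2.subs(x, 2))              # prints 1/4, i.e. f(4) = 1/4
</code></pre>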
| [
{
"answer_id": 2105640,
"score": 9,
"text": "<p>Let $u=x^2$ and apply chain rule.\n$$\\frac{d}{dx}\\int_0^{u}f(t) dt = f(u) \\frac{du}{dx}=f(x^2)\\cdot2x.$$</p>\n"
},
{
"answer_id": 2105694,
"score": 5,
"text": "<p>You were correct in assessing the situation in terms of an antiderivative of $f$, but you didn't follow through. If $F$ is an antiderivative of $f$, by the Fundamental Theorem of Calculus,\n$$\nF(x^2)-F(0) = \\int_0^{x^2}f(t)\\,dt.\n$$\nTo take the derivative, we apply the chain rule since we have to differentiate the composite $F(x^2)$. Since $F(0)$ is a constant, we have\n$$\nF'(x^2)(x^2)' = \\frac{d}{dx}\\int_0^{x^2}f(t)\\,dt.\n$$\n$F$ is an antiderivative of $f$, so $F'(x^2) = f(x^2)$. Putting it all together,\n$$\n\\frac{d}{dx}\\int_0^{x^2}f(t)\\,dt = 2x\\,f(x^2).\n$$</p>\n"
},
{
"answer_id": 2105644,
"score": 2,
"text": "<p>As you said, if $f$ is continuous, then $\\int_0^{x^2}f(t)dt=F(x^2)-F(0)$, so by taking the derivative on both sides using the chain rule one gets:</p>\n\n<p>$$\\frac{\\partial}{\\partial x}\\int_0^{x^2}f(t)dt=F'(x^2)\\cdot 2x=2xf(x^2)$$</p>\n"
},
{
"answer_id": 2105657,
"score": 1,
"text": "<p>You can use this general formula : $$\\frac {d}{dx} \\int_{u(x)}^{v(x)} f(t) dt = f(v(x))v'(x)-f(u(x))u'(x).$$ </p>\n\n<p>Note that $u(x)$ and $v(x)$ must be differentiable which is your case.</p>\n"
},
{
"answer_id": 2106039,
"score": 1,
"text": "<p>Another (very slightly different) approach.</p>\n\n<p>Start by setting $u=x^2$, and define a function\n$$F(u) = \\int_0^u f(t)\\space dt$$\nThe second fundamental theorem of calculus tells us that $\\frac{dF}{du} =f(u) = f(x^2)$. But <em>that's not what we need to compute</em>. We are asked to find $\\frac{dF}{dx}$. (Notice the different variable in the bottom of the derivative.)</p>\n\n<p>Fortunately the chain rule comes to our rescue. We have\n$$\\frac{dF}{dx} = \\frac{dF}{du} \\cdot \\frac{du}{dx}$$\nThe second factor on the right tells you where the unexpected $2x$ in the solution comes from.</p>\n"
}
] | Question: What is $\frac{d}{dx}\int_0^{x^2}f(t) \,dt$?
Answer 1: <p>Let $u=x^2$ and apply the chain rule.
$$\frac{d}{dx}\int_0^{u}f(t) dt = f(u) \frac{du}{dx}=f(x^2)\cdot2x.$$</p>
Answer 2: <p>You were correct in assessing the situation in terms of an antiderivative of $f$, but you didn't follow through. If $F$ is an antiderivative of $f$, by the Fundamental Theorem of Calculus,
$$
F(x^2)-F(0) = \int_0^{x^2}f(t)\,dt.
$$
To take the derivative, we apply the chain rule since we have to differentiate the composite $F(x^2)$. Since $F(0)$ is a constant, we have
$$
F'(x^2)(x^2)' = \frac{d}{dx}\int_0^{x^2}f(t)\,dt.
$$
$F$ is an antiderivative of $f$, so $F'(x^2) = f(x^2)$. Putting it all together,
$$
\frac{d}{dx}\int_0^{x^2}f(t)\,dt = 2x\,f(x^2).
$$</p>
Answer 3: <p>As you said, if $f$ is continuous, then $\int_0^{x^2}f(t)\,dt=F(x^2)-F(0)$, where $F$ is an antiderivative of $f$, so taking the derivative of both sides with the chain rule gives:</p>
<p>$$\frac{d}{dx}\int_0^{x^2}f(t)\,dt=F'(x^2)\cdot 2x=2xf(x^2)$$</p>
Answer 4: <p>You can use this general formula: $$\frac {d}{dx} \int_{u(x)}^{v(x)} f(t)\, dt = f(v(x))v'(x)-f(u(x))u'(x).$$ </p>
<p>Note that $u(x)$ and $v(x)$ must be differentiable, which is the case here.</p>
Answer 5: <p>Another (very slightly different) approach.</p>
<p>Start by setting $u=x^2$, and define a function
$$F(u) = \int_0^u f(t)\space dt$$
The second fundamental theorem of calculus tells us that $\frac{dF}{du} =f(u) = f(x^2)$. But <em>that's not what we need to compute</em>. We are asked to find $\frac{dF}{dx}$. (Notice the different variable in the bottom of the derivative.)</p>
<p>Fortunately the chain rule comes to our rescue. We have
$$\frac{dF}{dx} = \frac{dF}{du} \cdot \frac{du}{dx}$$
The second factor on the right tells you where the unexpected $2x$ in the solution comes from.</p>
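A quick symbolic sanity check of the formula all five answers arrive at (not part of the original answers; the integrand $f(t)=\cos t$ is an arbitrary illustrative choice, and sympy is assumed available):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.cos(t)  # sample integrand -- an arbitrary illustrative choice

# Left-hand side: differentiate the integral with variable upper limit x^2.
lhs = sp.diff(sp.integrate(f, (t, 0, x**2)), x)
# Right-hand side: the closed form 2*x*f(x^2) from the answers above.
rhs = 2 * x * f.subs(t, x**2)

print(lhs)                          # 2*x*cos(x**2)
print(sp.simplify(lhs - rhs) == 0)  # True: both sides agree
```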
|
Evaluating $\lim_{r \to \infty} r^c \frac{\int_0^{\pi/2}x^r \sin x dx}{\int_0^{\pi/2}x^r \cos x dx}$ | 2,105,761 | 5 | https://math.stackexchange.com/questions/2105761/evaluating-lim-r-to-infty-rc-frac-int-0-pi-2xr-sin-x-dx-int-0 | <p>The question is basically to find out the following limit ($c$ is any real number):
$$\lim_{r \to \infty} r^c \frac{\int_0^{\pi/2} x^r \sin x \, dx}{\int_0^{\pi/2}x^r \cos x \, dx}$$</p>
<p>I tried to use the substitution $x \mapsto \pi/2 - x$ and rewrote it as $$\lim_{r \to \infty} r^c \frac{\int_0^{\pi/2}(\pi/2-x)^r \cos x \, dx}{\int_0^{\pi/2}x^r \cos x \, dx}$$ and then expanded the numerator using the binomial theorem, but this did not take me to the answer. I also tried integration by parts in the numerator, but that did not help me either. Any hint on how to go ahead will be highly appreciated. Thanks.</p>
| [
{
"answer_id": 2105788,
"score": 6,
"text": "<p>In fact, integration by parts works in this case. For convenience, we write</p>\n\n<p>$$ S(r) := \\int_{0}^{\\pi/2} x^r \\sin x \\, dx, \\qquad C(r) := \\int_{0}^{\\pi/2} x^r \\cos x \\, dx. $$</p>\n\n<hr>\n\n<p><strong>Idea.</strong> The intuition is that, since $x^r$ is largest at $x = \\frac{\\pi}{2}$, the behavior of integrand near this point will determine the growth speed of $S(r)$ and $C(r)$. Thus loosely speaking</p>\n\n<p>$$ S(r) \\approx \\int_{0}^{\\pi/2} x^r \\, dx = \\frac{(\\pi/2)^{r+1}}{r+1}$$</p>\n\n<p>and</p>\n\n<p>$$ C(r) \\approx \\int_{0}^{\\pi/2} x^r \\left(\\frac{\\pi}{2} - x\\right) \\, dx\n= \\frac{(\\pi/2)^{r+2}}{(r+1)(r+2)}. $$</p>\n\n<p>This forces that $c = -1$.</p>\n\n<hr>\n\n<p><strong>Solution.</strong> We make the following two observations:</p>\n\n<ul>\n<li><p>Applying integration by parts, for $r > -1$ we have\n$$ C(r) = \\frac{1}{r+1}S(r+1), \\qquad S(r) = \\frac{(\\pi/2)^{r+1}}{r+1} - \\frac{1}{r+1}C(r+1). \\tag{*} $$</p></li>\n<li><p>Using the fact that sine and cosine are bounded by $1$, we have the following crude bound\n$$ |S(r)| \\leq \\frac{(\\pi/2)^{r+1}}{r+1}, \\qquad |C(r)| \\leq \\frac{(\\pi/2)^{r+1}}{r+1}. $$</p></li>\n</ul>\n\n<p>Thus if $r \\gg 1$, then applying $\\text{(*)}$ twice,</p>\n\n<p>$$\\frac{S(r)}{C(r)}\n= \\frac{(\\pi/2)^{r+1} - C(r+1)}{S(r+1)}\n= (r+2) \\frac{(\\pi/2)^{r+1} - C(r+1)}{(\\pi/2)^{r+2}- C(r+2)}.$$</p>\n\n<p>Dividing both the numerator and the denominator by $(\\pi/2)^{r+1}$ and utilizing our crude bound,</p>\n\n<p>$$\\frac{S(r)}{C(r)}\n= (r+2) \\frac{1 + \\mathcal{O}(r^{-1})}{(\\pi/2) + \\mathcal{O}(r^{-1})}. $$</p>\n\n<p>Therefore</p>\n\n<p>$$ \\lim_{r\\to\\infty} r^{-1} \\frac{S(r)}{C(r)} = \\frac{2}{\\pi} $$</p>\n\n<p>and $c = -1$ is only the possible choice.</p>\n"
},
{
"answer_id": 2105782,
"score": 0,
"text": "<p>Show that\n$$\\int_0^{\\pi/2}x^r \\sin(x)dx=(\\frac{\\pi}{2})^{r+1}\\int_0^1t^r \\sin(\\pi t/2)dt=(\\frac{\\pi}{2})^{r+1}a_r$$</p>\n\n<p>And integrating by parts\n$$a_r=\\frac{1}{r+1}-\\frac{\\pi}{2(r+1)}\\int_0^1t^{r+1}\\cos(\\pi t/2)dt$$</p>\n\n<p>It is now easy to see that $a_r$ is equivalent to $1/(r+1)$ as $r\\to +\\infty$.</p>\n\n<p>In the same way:</p>\n\n<p>$$\\int_0^{\\pi/2}x^r \\cos(x)dx=(\\frac{\\pi}{2})^{r+1}\\int_0^1t^r \\cos(\\pi t/2)dt=(\\frac{\\pi}{2})^{r+1}b_r$$</p>\n\n<p>and we have $b_r=\\frac{\\pi}{2(r+1)}a_{r+1}$. It is easy to finish. </p>\n"
}
] | Question: Evaluating $\lim_{r \to \infty} r^c \frac{\int_0^{\pi/2}x^r \sin x dx}{\int_0^{\pi/2}x^r \cos x dx}$
Answer 1: <p>In fact, integration by parts works in this case. For convenience, we write</p>
<p>$$ S(r) := \int_{0}^{\pi/2} x^r \sin x \, dx, \qquad C(r) := \int_{0}^{\pi/2} x^r \cos x \, dx. $$</p>
<hr>
<p><strong>Idea.</strong> The intuition is that, since $x^r$ is largest at $x = \frac{\pi}{2}$, the behavior of the integrand near this point determines the growth rate of $S(r)$ and $C(r)$. Thus, loosely speaking,</p>
<p>$$ S(r) \approx \int_{0}^{\pi/2} x^r \, dx = \frac{(\pi/2)^{r+1}}{r+1}$$</p>
<p>and</p>
<p>$$ C(r) \approx \int_{0}^{\pi/2} x^r \left(\frac{\pi}{2} - x\right) \, dx
= \frac{(\pi/2)^{r+2}}{(r+1)(r+2)}. $$</p>
<p>This forces $c = -1$.</p>
<hr>
<p><strong>Solution.</strong> We make the following two observations:</p>
<ul>
<li><p>Applying integration by parts, for $r > -1$ we have
$$ C(r) = \frac{1}{r+1}S(r+1), \qquad S(r) = \frac{(\pi/2)^{r+1}}{r+1} - \frac{1}{r+1}C(r+1). \tag{*} $$</p></li>
<li><p>Using the fact that sine and cosine are bounded by $1$, we have the following crude bound
$$ |S(r)| \leq \frac{(\pi/2)^{r+1}}{r+1}, \qquad |C(r)| \leq \frac{(\pi/2)^{r+1}}{r+1}. $$</p></li>
</ul>
<p>Thus if $r \gg 1$, then applying $\text{(*)}$ twice,</p>
<p>$$\frac{S(r)}{C(r)}
= \frac{(\pi/2)^{r+1} - C(r+1)}{S(r+1)}
= (r+2) \frac{(\pi/2)^{r+1} - C(r+1)}{(\pi/2)^{r+2}- C(r+2)}.$$</p>
<p>Dividing both the numerator and the denominator by $(\pi/2)^{r+1}$ and utilizing our crude bound,</p>
<p>$$\frac{S(r)}{C(r)}
= (r+2) \frac{1 + \mathcal{O}(r^{-1})}{(\pi/2) + \mathcal{O}(r^{-1})}. $$</p>
<p>Therefore</p>
<p>$$ \lim_{r\to\infty} r^{-1} \frac{S(r)}{C(r)} = \frac{2}{\pi} $$</p>
<p>and $c = -1$ is the only possible choice.</p>
Answer 2: <p>Show that
$$\int_0^{\pi/2}x^r \sin(x)dx=(\frac{\pi}{2})^{r+1}\int_0^1t^r \sin(\pi t/2)dt=(\frac{\pi}{2})^{r+1}a_r$$</p>
<p>And integrating by parts
$$a_r=\frac{1}{r+1}-\frac{\pi}{2(r+1)}\int_0^1t^{r+1}\cos(\pi t/2)dt$$</p>
<p>It is now easy to see that $a_r$ is asymptotically equivalent to $1/(r+1)$ as $r\to +\infty$.</p>
<p>In the same way:</p>
<p>$$\int_0^{\pi/2}x^r \cos(x)dx=(\frac{\pi}{2})^{r+1}\int_0^1t^r \cos(\pi t/2)dt=(\frac{\pi}{2})^{r+1}b_r$$</p>
<p>and we have $b_r=\frac{\pi}{2(r+1)}a_{r+1}$. It is easy to finish. </p>
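As a numerical sanity check of the limit $\lim_{r\to\infty} r^{-1}\,S(r)/C(r) = 2/\pi$ derived above (a sketch, not part of either answer): mpmath's arbitrary-precision quadrature is assumed here because $x^r$ exceeds double-precision range for large $r$, and the sample values of $r$ are arbitrary illustrations.

```python
import mpmath as mp

mp.mp.dps = 30  # extra working precision: the integrands peak sharply at pi/2

def scaled_ratio(r):
    # Computes r^(-1) * S(r)/C(r) in the notation of the first answer.
    S = mp.quad(lambda x: x**r * mp.sin(x), [0, mp.pi / 2])
    C = mp.quad(lambda x: x**r * mp.cos(x), [0, mp.pi / 2])
    return S / (r * C)

for r in (10, 100, 1000):
    print(r, scaled_ratio(r))  # values drift toward 2/pi as r grows

print('2/pi =', 2 / mp.pi)  # 0.63661977...
```

The computed ratios approach $2/\pi \approx 0.6366$ as $r$ grows, matching the asymptotic expansion in the first answer.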
|
Correct way to learn number theory? | 2,105,807 | 5 | https://math.stackexchange.com/questions/2105807/correct-way-to-learn-number-theory | <p><strong>Background</strong> <br></p>
<p>I am a recent college graduate: I have done my B.Tech in computer science and a master's in quantum information, and I am soon going to take up an industrial job. In the free time I have had since graduating (a few months), I started reading Sipser's book on computability and complexity theory. After reading it I wanted to learn more, so I turned to the book by Arora and Barak on computational complexity. When I reached the chapter on cryptography, I realized I did not know the fundamentals of number theory, so I started reading "An Introduction to the Theory of Numbers" by Hardy and Wright.</p>
<p><strong>Question</strong></p>
<p>I have read about one-third of Hardy and Wright. It got me really interested in elementary number theory, and I am planning to read the complete book and pick the subject up as a hobby. The problem I am facing is that, although so far I am able to follow the proofs on my own, I keep forgetting what I read earlier. For example, I am currently reading about continued fractions and approximations of irrationals by rationals, but I don't remember in detail the proofs, tricks, topics, etc. that I read before it (although I can always go back and remind myself). As I read further, I only remember the current topic, and I feel that if I read the book completely, at the end I would only remember the broad topics and get a broad overview. Also, I try to read a couple of pages every day, but there are stretches of many days where I go without reading a single page, and when I start reading again I tend to forget even the current topic.</p>
<p>So I wanted to ask: what is the correct way to learn number theory? Should I read Hardy and Wright completely (I still have to learn many other topics like Pell's equations, Diophantine equations, etc.)? Is there some other, more practical way of learning number theory as a hobby? How do I reach the threshold point of learning elementary number theory?</p>
| [
{
"answer_id": 2105836,
"score": 7,
"text": "<p>I would not recommend continuing with Hardy and Wright. What you need is strength on the basics, not a tourist's view of a wide range of topics. I would focus on problem solving. Do lots of cool problems, taken from various sources, at a level that is reasonably challenging, but not way out of range, so as to develop your problem-solving and proof skills.</p>\n"
},
{
"answer_id": 2105835,
"score": 5,
"text": "<p>As one of my former lecturer's was fond of saying \"mathematics is not a spectator sport\". By this I mean that if you want to properly learn and understand it then you should get your hands dirty by doing some problems.</p>\n\n<p>This will mean you have to look up definitions often to begin with until they soak in but eventually you also get spot the same tricks being used over and over again as you figure out how to adapt them and will help you see how different topics link together.</p>\n\n<p>You could also try to attempt to prove some of the theorems before reading the proofs in the book as this will have a similar affect but they're not always constructive in teaching you how to actually apply the concept so a few problems to accompany it will still be worthwhile.</p>\n\n<p>Doing it this way will of course take a lot longer but will definitely give you a deeper understanding and the time spent means that hopefully you won't forget all of it when you start the next chapter! Good luck reading.</p>\n"
}
] | Question: Correct way to learn number theory?
Answer 1: <p>I would not recommend continuing with Hardy and Wright. What you need is strength on the basics, not a tourist's view of a wide range of topics. I would focus on problem solving. Do lots of cool problems, taken from various sources, at a level that is reasonably challenging, but not way out of range, so as to develop your problem-solving and proof skills.</p>
Answer 2: <p>As one of my former lecturers was fond of saying, "mathematics is not a spectator sport". By this I mean that if you want to properly learn and understand it, then you should get your hands dirty by doing some problems.</p>
<p>This will mean you have to look up definitions often to begin with, until they soak in, but eventually you also get to spot the same tricks being used over and over again as you figure out how to adapt them, and this will help you see how different topics link together.</p>
<p>You could also try to prove some of the theorems before reading the proofs in the book, as this will have a similar effect, but proofs are not always constructive in teaching you how to actually apply a concept, so a few problems to accompany them will still be worthwhile.</p>
<p>Doing it this way will of course take a lot longer, but it will definitely give you a deeper understanding, and the time spent means that hopefully you won't forget all of it when you start the next chapter! Good luck reading.</p>
|
Limit without L'Hopital rule | 2,105,844 | 5 | https://math.stackexchange.com/questions/2105844/limit-without-lhopital-rule | <p>How can one compute the following limits without using L'Hopital's rule?</p>
<p>1) $\displaystyle\underset{x\to 0^{+}}{\lim} \frac{e^{1/x} + 2 e ^{-1/x} + \ln x}{3e^{1/x} + 5e^{-1/x} + 7\ln x} $</p>
<p>2) $\displaystyle\underset{x\to 0^{+}}{\lim} \frac{x+e^{-1/x}}{x-e^{-1/x}}$</p>
<p>I wouldn't know where to begin. Thank you very much. </p>
| [
{
"answer_id": 2105858,
"score": 3,
"text": "<p>For problem 1, I did this: </p>\n\n<p>Take out $e^{1/x}$</p>\n\n<p>$$\\lim_{x\\to 0} \\frac{1+ 2\\,e^{-2/x}+e^{-1/x}\\ln x}{3 + 5\\,e^{-2/x} + 7\\,e^{-1/x}\\ln x}$$</p>\n\n<p>Now what? </p>\n\n<p>For problem 2, I did this: </p>\n\n<p>Substitute $y=1/x$</p>\n\n<p>$$\\lim_{x\\to 0} \\frac{1/y + e^{-y}}{1/y - e^{-y}} $$</p>\n\n<p>Now what? </p>\n\n<p>Thank you very much. </p>\n"
},
{
"answer_id": 2105874,
"score": 1,
"text": "<p>Following the hint for 1) you get</p>\n\n<p>$$\n\\require{cancel}\n\\lim_{x\\to 0^{+}} \\frac{\\cancel{e^{1/x}}}{\\cancel{e^{1/x}}}\\frac{1 + 2e^{-2/x} + e^{-1/x}\\ln x}{3 + 5e^{-2/x} + 7e^{-1/x}\\ln x}\n$$</p>\n\n<p>And then we observe that, for $x\\to 0^{+}$, $e^{-1/x}$ wins over $\\ln x$:</p>\n\n<p>$$\n\\left\\vert \\frac{\\ln x}{e^{1/x}} \\right\\vert = \\frac{\\ln(1/x)}{e^{1/x}} \\leq \\frac{1/x}{e^{1/x}} \\underset{x\\to0^{+}}{\\longrightarrow} 0\n$$</p>\n\n<p>Here I used the <a href=\"http://functions.wolfram.com/ElementaryFunctions/Log/29/\" rel=\"nofollow noreferrer\">well-known logarithm inequality</a> $\\ln t \\leq t$ (with $t=1/x$), together with $u/e^{u}\\to 0$ as $u\\to\\infty$. So the limit evaluates to</p>\n\n<p>$$\n\\lim_{x\\to 0^{+}} \\frac{1 + 2e^{-2/x} + e^{-1/x}\\ln x}{3 + 5e^{-2/x} + 7e^{-1/x}\\ln x} = \\frac{1}{3}\n$$</p>\n\n<p>Because $e^{-2/x}\\to 0$ for $x\\to 0^{+}$.</p>\n\n<p>For problem 2), following from your answer, do the same trick: factor $1/y$ out of the fraction, and then </p>\n\n<p>$$\ny\\,e^{-y}\\underset{y\\to\\infty}{\\longrightarrow} 0\n$$</p>\n\n<p>(This is well known; no inequalities involved.) So the limit for 2) is 1.</p>\n"
}
] | Question: Limit without L'Hopital rule
Answer 1: <p>For problem 1, I did this: </p>
<p>Take out $e^{1/x}$</p>
<p>$$\lim_{x\to 0} \frac{1+ 2\,e^{-2/x}+e^{-1/x}\ln x}{3 + 5\,e^{-2/x} + 7\,e^{-1/x}\ln x}$$</p>
<p>Now what? </p>
<p>For problem 2, I did this: </p>
<p>Substitute $y=1/x$</p>
<p>$$\lim_{x\to 0} \frac{1/y + e^{-y}}{1/y - e^{-y}} $$</p>
<p>Now what? </p>
<p>Thank you very much. </p>
Answer 2: <p>Following the hint for 1) you get</p>
<p>$$
\require{cancel}
\lim_{x\to 0^{+}} \frac{\cancel{e^{1/x}}}{\cancel{e^{1/x}}}\frac{1 + 2e^{-2/x} + e^{-1/x}\ln x}{3 + 5e^{-2/x} + 7e^{-1/x}\ln x}
$$</p>
<p>And then we observe that, for $x\to 0^{+}$, $e^{-1/x}$ wins over $\ln x$:</p>
<p>$$
\left\vert \frac{\ln x}{e^{1/x}} \right\vert = \frac{\ln(1/x)}{e^{1/x}} \leq \frac{1/x}{e^{1/x}} \underset{x\to0^{+}}{\longrightarrow} 0
$$</p>
<p>Here I used the <a href="http://functions.wolfram.com/ElementaryFunctions/Log/29/" rel="nofollow noreferrer">well-known logarithm inequality</a> $\ln t \leq t$ (with $t=1/x$), together with $u/e^{u}\to 0$ as $u\to\infty$. So the limit evaluates to</p>
<p>$$
\lim_{x\to 0^{+}} \frac{1 + 2e^{-2/x} + e^{-1/x}\ln x}{3 + 5e^{-2/x} + 7e^{-1/x}\ln x} = \frac{1}{3}
$$</p>
<p>Because $e^{-2/x}\to 0$ for $x\to 0$.</p>
<p>For problem 2), following from your answer, do the same trick: factor $1/y$ out of the fraction, and then </p>
<p>$$
y\,e^{-y}\underset{y\to\infty}{\longrightarrow} 0
$$</p>
<p>(This is well known; no inequalities involved.) So the limit for 2) is 1.</p>
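<p>As a quick numerical sanity check, here is a minimal Python sketch (the helper names are purely illustrative) that evaluates both quotients at small positive $x$. It uses the rewritten forms with $e^{-1/x}$ factors, so nothing overflows, and the printed values approach $1/3$ and $1$ as expected:</p>
<pre><code>import math

def f1(x):
    # quotient from problem 1, after dividing numerator and denominator by e^(1/x)
    a = math.exp(-1.0 / x)          # e^(-1/x): tiny for small x, so no overflow
    num = 1 + 2 * a * a + a * math.log(x)
    den = 3 + 5 * a * a + 7 * a * math.log(x)
    return num / den

def f2(x):
    # quotient from problem 2: (x + e^(-1/x)) / (x - e^(-1/x))
    a = math.exp(-1.0 / x)
    return (x + a) / (x - a)

for x in [0.5, 0.1, 0.05, 0.01]:
    print(x, f1(x), f2(x))          # f1 -> 1/3 = 0.333..., f2 -> 1
</code></pre>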
|
Probability of getting an odd number of heads if n biased coins are tossed once. | 2,105,857 | 5 | https://math.stackexchange.com/questions/2105857/probability-of-getting-an-odd-number-of-heads-if-n-biased-coins-are-tossed-once | <p>The question is basically to find the probability of getting an odd number of heads when $n$ biased coins, with the $m^{\text{th}}$ coin having probability $\frac{1}{2m+1}$ of showing a head ($m=1,2,\cdots,n$), are tossed once. The results for the coins are independent.</p>
<p>If we first consider the case where exactly one head turns up, the probability equals $$\sum_{m=1}^{n} \left[\frac{1}{2m+1} \prod_{k=1,k \neq m}^{n} \left(1-\frac{1}{2k+1}\right)\right],$$ which seems very difficult to evaluate. It gets more complicated as the number of heads increases. I could not find a simple way to do this. Any suggestion would be highly appreciated. Thanks.</p>
| [
{
"answer_id": 2105873,
"score": 5,
"text": "<p>We work recursively. Let $P_n$ denote the answer if there are $n$ coins. Of course $P_1=\\frac 13$. Considering the toss of the last coin we see that $$P_{n+1}=P_n\\times \\left(1-\\frac 1{2n+3}\\right)+(1-P_n)\\times \\frac 1{2n+3}=P_n\\times \\left(1-\\frac 2{2n+3}\\right)+\\frac 1{2n+3}$$</p>\n\n<p>This can be solved in closed form. We get $$P_n=\\frac n{2n+1}$$</p>\n\n<p>Not a lot of insight to offer on that...I just looked at the table, saw the pattern, then mechanically showed that the expression satisfied the recursion. Of course, the simplicity of the final form suggests that there might be a direct argument for it.</p>\n"
},
{
"answer_id": 2105946,
"score": 5,
"text": "<p>A note for solving lulu's recursion </p>\n\n<p>We have $$P_{n+1}=P_n \\left(1-\\frac 2{2n+3}\\right)+\\frac 1{2n+3}= P_n \\frac{2n+1}{2n+3} + \\frac 1{2n+3}$$</p>\n\n<p>with $P_0 =0$</p>\n\n<p>Multiplying by $2n+3$:</p>\n\n<p>$$P_{n+1} (2n+3)=P_{n+1}(2 (n+1)+1)=P_n \\left({2n+1}\\right)+1$$</p>\n\n<p>Calling $A_n = P_n (2n+1)$ this gives</p>\n\n<p>$$A_{n+1}=A_n+1$$</p>\n\n<p>with $A_0=0$. Of course, this implies $A_n=n$ and hence </p>\n\n<p>$$ P_n = \\frac{n}{2n+1}$$</p>\n"
}
] | Question: Probability of getting an odd number of heads if n biased coins are tossed once.
Answer 1: <p>We work recursively. Let $P_n$ denote the answer if there are $n$ coins. Of course $P_1=\frac 13$. Considering the toss of the last coin we see that $$P_{n+1}=P_n\times \left(1-\frac 1{2n+3}\right)+(1-P_n)\times \frac 1{2n+3}=P_n\times \left(1-\frac 2{2n+3}\right)+\frac 1{2n+3}$$</p>
<p>This can be solved in closed form. We get $$P_n=\frac n{2n+1}$$</p>
<p>Not a lot of insight to offer on that...I just looked at the table, saw the pattern, then mechanically showed that the expression satisfied the recursion. Of course, the simplicity of the final form suggests that there might be a direct argument for it.</p>
Answer 2: <p>A note for solving lulu's recursion </p>
<p>We have $$P_{n+1}=P_n \left(1-\frac 2{2n+3}\right)+\frac 1{2n+3}= P_n \frac{2n+1}{2n+3} + \frac 1{2n+3}$$</p>
<p>with $P_0 =0$</p>
<p>Multiplying by $2n+3$:</p>
<p>$$P_{n+1} (2n+3)=P_{n+1}(2 (n+1)+1)=P_n \left({2n+1}\right)+1$$</p>
<p>Calling $A_n = P_n (2n+1)$ this gives</p>
<p>$$A_{n+1}=A_n+1$$</p>
<p>with $A_0=0$. Of course, this implies $A_n=n$ and hence </p>
<p>$$ P_n = \frac{n}{2n+1}$$</p>
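<p>Both the recursion and the closed form are easy to corroborate with exact arithmetic. Below is a minimal Python sketch (the helper names are ours, purely illustrative), assuming the head probabilities $1/(2m+1)$ from the question: it runs the recursion and also brute-forces the odd-heads probability over all $2^n$ outcomes:</p>
<pre><code>from itertools import product
from fractions import Fraction

def p_recursive(n):
    # P_{k+1} = P_k * (1 - 2/(2k+3)) + 1/(2k+3), starting from P_0 = 0
    P = Fraction(0)
    for k in range(n):
        P = P * (1 - Fraction(2, 2 * k + 3)) + Fraction(1, 2 * k + 3)
    return P

def p_bruteforce(n):
    # add up the probability of every outcome with an odd number of heads
    probs = [Fraction(1, 2 * m + 1) for m in range(1, n + 1)]
    total = Fraction(0)
    for outcome in product([0, 1], repeat=n):
        if sum(outcome) % 2 == 1:
            term = Fraction(1)
            for heads, p in zip(outcome, probs):
                term *= p if heads else 1 - p
            total += term
    return total

for n in range(1, 7):
    assert p_recursive(n) == p_bruteforce(n) == Fraction(n, 2 * n + 1)
print("n/(2n+1) verified for n = 1..6")
</code></pre>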
|
Solutions to $|z+1-i\sqrt{3}|=|z-1+i\sqrt{3}|$ | 2,105,893 | 5 | https://math.stackexchange.com/questions/2105893/solutions-to-z1-i-sqrt3-z-1i-sqrt3 | <p>Find all the complex numbers $z$ satisfying
\begin{equation}
|z+1-i\sqrt{3}|=|z-1+i\sqrt{3}|
\end{equation}
I tried using $z=a+bi$, then using the formula for absolute value:
\begin{equation}
|(a+1)+i(b-\sqrt{3})|=|(a-1)+i(b+\sqrt{3})|\\
\sqrt{(a+1)^2+(b-\sqrt{3})^2}=\sqrt{(a-1)^2+(b+\sqrt{3})^2}
\end{equation}
Is this the right way to go? Should I solve for b or a, and in that case what does that tell me about possible solutions? Any help would be appreciated.</p>
<p>Update:
Solved the equation for $y$, got
\begin{equation}
y=\frac{x}{\sqrt{3}}
\end{equation}
Does this mean that the equation is satisfied for all complex numbers on the line $y=\frac{x}{\sqrt{3}}$?</p>
| [
{
"answer_id": 2105911,
"score": 6,
"text": "<p>Yes, this is a correct way to approach the problem. By squaring each side of your last line and simplifying, one obtains:</p>\n\n<p>$$ 4a - 4\\sqrt{3}b = 0 \\Rightarrow a = \\sqrt{3} b.$$</p>\n\n<p>Thus the set that you are looking for consists of all the complex numbers $z = a + bi$ such that $a = \\sqrt{3}b$, which in the complex plane is a straight line.</p>\n\n<p>This was to be expected, because the equation can be rewritten as\n$$ |z - (-1 + i\\sqrt{3})| = |z - (1 - i\\sqrt{3})|,$$\nfrom where the problem can be solved geometrically: you are looking for points of the complex plane at an equal distance from $(-1 + i\\sqrt{3})$ and $(1-i\\sqrt{3})$.</p>\n"
},
{
"answer_id": 2105926,
"score": 3,
"text": "<p>Write </p>\n\n<p>$|z-(-1+i\\sqrt{3})|=|z-(1-i\\sqrt{3})|$</p>\n\n<p>This is the <strong>locus</strong> of all $z$ whose distance from the two points $A=-1+i\\sqrt{3}$ and $B=1-i\\sqrt{3}$ is equal: the perpendicular bisector of the segment $AB$, which is the line $y=x/\\sqrt{3}$.</p>\n"
},
{
"answer_id": 2105898,
"score": 2,
"text": "<p>Hint:</p>\n\n<p>You are on the right track. Square both sides to find the relation between $a$ and $b$.</p>\n"
},
{
"answer_id": 2105902,
"score": 2,
"text": "<p>Hint: Square the equation and rewrite the square of the absolute values as the product of the complex number and its complex conjugate.</p>\n\n<p>For example let $w=1-i\\sqrt{3}$ then we can rewrite:</p>\n\n<p>$|z+w|=|z-w| \\implies |z+w|^2=|z-w|^2 \\implies (z+w)(z^*+w^*)=(z-w)(z^*-w^*).$</p>\n\n<p>This can be rewritten as:</p>\n\n<p>$$zz^*+wz^*+zw^*+ww^*=zz^*-wz^*-zw^*+ww^*$$\n$$wz^*+zw^*=-wz^*-zw^* \\implies 2wz^*=-2zw^* \\implies wz^*=-zw^*$$</p>\n\n<p>Now plug in $z = x+iy$ and $w=1-i\\sqrt{3}$. It should be easy to solve from here.</p>\n"
},
{
"answer_id": 2106443,
"score": 0,
"text": "<p>$$\n|z + 1 - i\\sqrt{3}| = |z - 1 + i\\sqrt{3}| \\\\\n\\implies |z - (-1 + i\\sqrt{3})| = |z - (1 - i\\sqrt{3})|\n$$</p>\n\n<p>If I take $z_0 = 1 - i\\sqrt{3}$, then the equation</p>\n\n<p>$$\n|z - (-z_0)| = |z - z_0|\n$$</p>\n\n<p>describes the perpendicular bisector of the straight line joining the points $P(z_0)$ and $Q(-z_0)$. What is clear is that the required line passes through the origin. The slope of $PQ$ is</p>\n\n<p>$$\nm_{PQ} = \\frac{\\sqrt{3} + \\sqrt{3}}{-1-1} = -\\sqrt{3}.\n$$</p>\n\n<p>Hence a line perpendicular to $PQ$ has a slope $1/\\sqrt{3}$. The required line is thus</p>\n\n<p>$$\ny = \\frac{x}{\\sqrt{3}} \\\\\n\\implies \\sqrt{3}y = x.\n$$</p>\n"
}
] | Question: Solutions to $|z+1-i\sqrt{3}|=|z-1+i\sqrt{3}|$
Answer 1: <p>Yes, this is a correct way to approach the problem. By squaring each side of your last line and simplifying, one obtains:</p>
<p>$$ 4a - 4\sqrt{3}b = 0 \Rightarrow a = \sqrt{3} b.$$</p>
<p>Thus the set that you are looking for consists of all the complex numbers $z = a + bi$ such that $a = \sqrt{3}b$, which in the complex plane is a straight line.</p>
<p>This was to be expected, because the equation can be rewritten as
$$ |z - (-1 + i\sqrt{3})| = |z - (1 - i\sqrt{3})|,$$
from where the problem can be solved geometrically: you are looking for points of the complex plane at an equal distance from $(-1 + i\sqrt{3})$ and $(1-i\sqrt{3})$.</p>
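<p>For completeness, the squaring step can be spelled out. Expanding both sides and cancelling the common terms $a^2+b^2+4$ gives</p>
<p>$$(a+1)^2+(b-\sqrt{3})^2=(a-1)^2+(b+\sqrt{3})^2 \iff 2a-2\sqrt{3}b=-2a+2\sqrt{3}b \iff 4a-4\sqrt{3}b=0.$$</p>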
Answer 2: <p>Write </p>
<p>$|z-(-1+i\sqrt{3})|=|z-(1-i\sqrt{3})|$</p>
<p>This is the <strong>locus</strong> of all $z$ whose distance from the two points $A=-1+i\sqrt{3}$ and $B=1-i\sqrt{3}$ is equal: the perpendicular bisector of the segment $AB$, which is the line $y=x/\sqrt{3}$.</p>
Answer 3: <p>Hint:</p>
<p>You are on the right track. Square both sides to find the relation between $a$ and $b$.</p>
Answer 4: <p>Hint: Square the equation and rewrite the square of the absolute values as the product of the complex number and its complex conjugate.</p>
<p>For example let $w=1-i\sqrt{3}$ then we can rewrite:</p>
<p>$|z+w|=|z-w| \implies |z+w|^2=|z-w|^2 \implies (z+w)(z^*+w^*)=(z-w)(z^*-w^*).$</p>
<p>This can be rewritten as:</p>
<p>$$zz^*+wz^*+zw^*+ww^*=zz^*-wz^*-zw^*+ww^*$$
$$wz^*+zw^*=-wz^*-zw^* \implies 2wz^*=-2zw^* \implies wz^*=-zw^*$$</p>
<p>Now plug in $z = x+iy$ and $w=1-i\sqrt{3}$. It should be easy to solve from here.</p>
Answer 5: <p>$$
|z + 1 - i\sqrt{3}| = |z - 1 + i\sqrt{3}| \\
\implies |z - (-1 + i\sqrt{3})| = |z - (1 - i\sqrt{3})|
$$</p>
<p>If I take $z_0 = 1 - i\sqrt{3}$, then the equation</p>
<p>$$
|z - (-z_0)| = |z - z_0|
$$</p>
<p>describes the perpendicular bisector of the straight line joining the points $P(z_0)$ and $Q(-z_0)$. What is clear is that the required line passes through the origin. The slope of $PQ$ is</p>
<p>$$
m_{PQ} = \frac{\sqrt{3} + \sqrt{3}}{-1-1} = -\sqrt{3}.
$$</p>
<p>Hence a line perpendicular to $PQ$ has a slope $1/\sqrt{3}$. The required line is thus</p>
<p>$$
y = \frac{x}{\sqrt{3}} \\
\implies \sqrt{3}y = x.
$$</p>
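<p>A quick numerical cross-check of this line (a minimal Python sketch): points with $x=\sqrt{3}y$ are indeed equidistant from $-1+i\sqrt{3}$ and $1-i\sqrt{3}$.</p>
<pre><code>import math

w = complex(1, -math.sqrt(3))   # the point 1 - i*sqrt(3); the other point is -w

for t in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    z = complex(math.sqrt(3) * t, t)             # a point with x = sqrt(3) * y
    assert math.isclose(abs(z + w), abs(z - w))  # |z - (-w)| equals |z - w|
print("sample points on x = sqrt(3)*y are equidistant from the two points")
</code></pre>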
|
Proof, that $d(x_n,y_n)$ converges to $d(x,y)$(proof explanation) | 2,106,316 | 5 | https://math.stackexchange.com/questions/2106316/proof-that-dx-n-y-n-converges-to-dx-yproof-explanation | <p>In a metric space $(S,d)$, assume that $x_n\,\to\,x$ and $y_n\,\to\,y$. Prove that $d\left(x_n,y_n\right)\,\to\,d(x,y)$.</p>
<p>$\textbf{Proof:}$ Since $x_n\,\to\,x$ and $y_n\,\to\,y$, given $\epsilon>0$ there exists a positive integer $N$ such that for $n\ge N$ we have
\begin{equation}
d\left(x_n,x\right)<\epsilon/2 \text{ and }d\left(y_n,y\right)<\epsilon/2
\end{equation}</p>
<p>Hence, for $n\ge N$, we have</p>
<p>\begin{eqnarray}
\left\lvert\,d\left(x_n,y_n\right)-d\left(x,y\right)\,\right\rvert&\le&\left\lvert\,d\left(x_n,x\right)+d\left(y_n,y\right)\,\right\rvert\\
&=&d\left(x_n,x\right)+d\left(y_n,y\right)\\
&<&\epsilon/2+\epsilon/2\\
&=&\epsilon
\end{eqnarray}
I cannot understand why this is true:
\begin{equation}
\left\lvert\,d\left(x_n,y_n\right)-d\left(x,y\right)\,\right\rvert\le\left\lvert\,d\left(x_n,x\right)+d\left(y_n,y\right)\,\right\rvert
\end{equation}</p>
<p>Can you please explain that to me?</p>
| [
{
"answer_id": 2106324,
"score": 5,
"text": "<p><strong>Hint:</strong></p>\n\n<p>By the triangle inequality,</p>\n\n<p>$$ d(x,y) \\leq d(x,u) + d(u,v) +d(v,y) $$</p>\n"
},
{
"answer_id": 2106343,
"score": 5,
"text": "<p>By the triangle inequality,\n$$d(x_n,y_n)\\leq d(x_n,x)+d(x,y)+d(y,y_n)\\tag 1$$ and</p>\n\n<p>$$d(x,y)\\leq d(x,x_n)+d(x_n,y_n)+d(y_n,y)\\tag 2$$</p>\n\n<p>From $(1)$, we get \n$$d(x_n,y_n)-d(x,y)\\leq d(x_n,x)+d(y_n,y)\\tag 3$$ \nand from $(2)$, we get\n$$-[d(x_n,x)+d(y_n,y)]\\leq d(x_n,y_n)-d(x,y)\\tag 4$$</p>\n\n<p>Combining $(4)$ and $(3)$, we get\n$$-[d(x_n,x)+d(y_n,y)]\\leq d(x_n,y_n)-d(x,y)\\leq d(x_n,x)+d(y_n,y)$$</p>\n\n<p>and since $d(x_n,x)+d(y_n,y)\\geq 0$, we get </p>\n\n<p>$$\\Big|d(x_n,y_n)-d(x,y)\\Big|\\leq d(x_n,x)+d(y_n,y).$$</p>\n"
},
{
"answer_id": 2106341,
"score": 3,
"text": "<p>Temporarily assume $d(x_n,y_n) \\geq d(x,y)$.</p>\n\n<p>Then using the triangle inequality for each step of this reasoning we have:\n$$\nd(x_n, y) \\leq d(x_n,x) + d(x,y)\\\\\nd(x_n,y_n) \\leq d(x_n, y) + d(y,y_n) \\leq d(x_n,x) + d(x,y) + d(y,y_n) \n$$\nAnd subtracting $d(x,y)$ from both sides\n$$\n|d(x_n,y_n) -d(x,y)| = d(x_n,y_n) -d(x,y) \\leq d(x_n,x) + d(y,y_n) \\leq |d(x_n,x) + d(y,y_n)|\n$$\nSo the statement holds if $d(x_n,y_n) \\geq d(x,y)$.</p>\n\n<p>The same type of reasoning (about going around a quadrilateral in two ways) holds when $d(x_n,y_n) < d(x,y)$. In fact, if you just swap $x_n \\leftrightarrow x$ and $y_n \\leftrightarrow y$ in the above proof, it works for \n$d(x_n,y_n) \\leq d(x,y)$.</p>\n"
}
] | Question: Proof, that $d(x_n,y_n)$ converges to $d(x,y)$(proof explanation)
Answer 1: <p><strong>Hint:</strong></p>
<p>By the triangle inequality,</p>
<p>$$ d(x,y) \leq d(x,u) + d(u,v) +d(v,y) $$</p>
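<p>Spelled out, one may apply this hint twice, once with $u=x_n$, $v=y_n$ and once with the two pairs exchanged, to get the two one-sided bounds</p>
<p>$$d(x,y)-d(x_n,y_n)\le d(x,x_n)+d(y_n,y), \qquad d(x_n,y_n)-d(x,y)\le d(x_n,x)+d(y,y_n),$$
which together give $\big|d(x_n,y_n)-d(x,y)\big|\le d(x_n,x)+d(y_n,y)$.</p>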
Answer 2: <p>By the triangle inequality,
$$d(x_n,y_n)\leq d(x_n,x)+d(x,y)+d(y,y_n)\tag 1$$ and</p>
<p>$$d(x,y)\leq d(x,x_n)+d(x_n,y_n)+d(y_n,y)\tag 2$$</p>
<p>From $(1)$, we get
$$d(x_n,y_n)-d(x,y)\leq d(x_n,x)+d(y_n,y)\tag 3$$
and from $(2)$, we get
$$-[d(x_n,x)+d(y_n,y)]\leq d(x_n,y_n)-d(x,y)\tag 4$$</p>
<p>Combining $(4)$ and $(3)$, we get
$$-[d(x_n,x)+d(y_n,y)]\leq d(x_n,y_n)-d(x,y)\leq d(x_n,x)+d(y_n,y)$$</p>
<p>and since $d(x_n,x)+d(y_n,y)\geq 0$, we get </p>
<p>$$\Big|d(x_n,y_n)-d(x,y)\Big|\leq d(x_n,x)+d(y_n,y).$$</p>
Answer 3: <p>Temporarily assume $d(x_n,y_n) \geq d(x,y)$.</p>
<p>Then using the triangle inequality for each step of this reasoning we have:
$$
d(x_n, y) \leq d(x_n,x) + d(x,y)\\
d(x_n,y_n) \leq d(x_n, y) + d(y,y_n) \leq d(x_n,x) + d(x,y) + d(y,y_n)
$$
And subtracting $d(x,y)$ from both sides
$$
|d(x_n,y_n) -d(x,y)| = d(x_n,y_n) -d(x,y) \leq d(x_n,x) + d(y,y_n) \leq |d(x_n,x) + d(y,y_n)|
$$
So the statement holds if $d(x_n,y_n) \geq d(x,y)$.</p>
<p>The same type of reasoning (about going around a quadrilateral in two ways) holds when $d(x_n,y_n) < d(x,y)$. In fact, if you just swap $x_n \leftrightarrow x$ and $y_n \leftrightarrow y$ in the above proof, it works for
$d(x_n,y_n) \leq d(x,y)$.</p>
|
Integrating $\int \frac{\cos x}{\sin x+\cos x}dx$. | 2,106,325 | 5 | https://math.stackexchange.com/questions/2106325/integrating-int-frac-cos-x-sin-x-cos-xdx | <p>so I've just had my first exam (went pretty well) but I ran into this thing as the first part of the last question.</p>
<p>$$\int \frac{\cos x}{\sin x+\cos x}dx$$</p>
<p>I had a look on Wolfram after the exam, and it advised multiplying top and bottom by $\sec^3x$. </p>
<p>Is there another way to tackle this if you didn't know that trick? I find it hard to believe I was meant to know it, and this was disproportionately harder than any integration question I'd come across when practicing.</p>
<p>I couldn't get anywhere when trying to solve this.</p>
<p>Thanks.</p>
| [
{
"answer_id": 2106338,
"score": 7,
"text": "<p>Note that $\\cos x = \\frac{1}{2} ( 2 \\cos x + \\sin x - \\sin x)$.\nSo then\n$$ \\int \\frac{\\cos x}{\\sin x + \\cos x} dx = \\frac{1}{2} \\int \\frac{\\cos x - \\sin x}{\\cos x + \\sin x}dx + \\frac{1}{2}\\int \\frac{\\cos x + \\sin x}{\\cos x + \\sin x} dx.$$\nThe second is easy to integrate, and the first is now just a $u$-substituteion as the numerator is the derivative of the denominator.</p>\n"
},
{
"answer_id": 2106335,
"score": 2,
"text": "<p>Hint: $\\sin(x)+\\cos(x)=\\sqrt{2}\\sin(x+\\pi/4)$ Then substitute $u=x+\\pi/4$.</p>\n\n<p>A more general approach would be the following. Replace $\\cos(x)$ and $\\sin(x)$ by Euler's formula.</p>\n\n<p>$$\\int \\frac{f(\\cos(x),\\sin(x))}{g(\\cos(x),\\sin(x))}dx=\\int \\frac{f(\\exp(ix),\\exp(-ix))}{g(\\exp(ix),\\exp(-ix))}dx.$$</p>\n\n<p>Now, substitute $u=\\exp(ix) \\implies du=iudx$.</p>\n\n<p>$$\\int \\frac{f(u,u^{-1})}{g(u,u^{-1})}\\frac{du}{iu}=\\int \\frac{f(u)}{g(u)}\\frac{du}{iu}.$$</p>\n\n<p>If $f$ and $g$ are simple polynomials the resulting integral can always be solved by partial fractions.</p>\n"
},
{
"answer_id": 2106340,
"score": 2,
"text": "<p><strong>Method 1:-</strong><span class=\"math-container\">$$\\frac{cosx}{cosx+sinx}=\\frac{cos(x-π/4 +π/4)}{\\sqrt {2}cos(x-π/4)}=\\frac{cos(x-π/4)-sin(x-π/4)}{2cos(x-π/4)}$$</span></p>\n<p><strong>Method 2:-</strong> Write the numerator</p>\n<p><span class=\"math-container\">$cosx=A(cosx+sinx)+B\\frac{d}{dx}(cosx+sinx)$</span></p>\n<p>to find <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$B$</span> then the answer will be <span class=\"math-container\">$Ax+Blog|cosx+sinx|+c$</span></p>\n"
},
{
"answer_id": 2106392,
"score": 1,
"text": "<p>HINT:</p>\n\n<p>For $I=\\displaystyle\\int\\dfrac{a\\cos x+b\\sin x}{c\\cos x+d\\sin x}dx,$</p>\n\n<p>write $a\\cos x+b\\sin x=A(c\\cos x+d\\sin x)+B\\cdot\\dfrac{d(c\\cos x+d\\sin x)}{dx}$ where $A,B$ are arbitrary constants so that</p>\n\n<p>$I=\\displaystyle A\\int\\ dx+B\\int\\dfrac{d(c\\cos x+d\\sin x)}{c\\cos x+d\\sin x}$</p>\n\n<p>Compare the coefficients of $\\cos x,\\sin x$ to find $A,B$</p>\n"
},
{
"answer_id": 2106464,
"score": 1,
"text": "<p>Dividing numerator and denominator by $\\cos x$ yields</p>\n\n<p>$$\\frac{dx}{\\tan x+1}$$</p>\n\n<p>Now substitute $x=\\tan^{-1}t,dx=\\dfrac{dt}{1+t^2}$ to get</p>\n\n<p>$$\\int\\frac{dt}{(t+1)(t^2+1)}$$</p>\n\n<p>which can now be solved with partial fractions. Similar to the other method but hopefully easier to see.</p>\n"
}
] | Question: Integrating $\int \frac{\cos x}{\sin x+\cos x}dx$.
Answer 1: <p>Note that $\cos x = \frac{1}{2} \big( (\cos x + \sin x) + (\cos x - \sin x) \big)$.
So then
$$ \int \frac{\cos x}{\sin x + \cos x} dx = \frac{1}{2} \int \frac{\cos x - \sin x}{\cos x + \sin x}dx + \frac{1}{2}\int \frac{\cos x + \sin x}{\cos x + \sin x} dx.$$
The second is easy to integrate, and the first is now just a $u$-substitution, as the numerator is the derivative of the denominator.</p>
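<p>Carrying both pieces to the end (a short sketch): the first integrand is $\frac{u'}{u}$ with $u=\cos x+\sin x$, so</p>
<p>$$\int \frac{\cos x}{\sin x + \cos x}\,dx=\frac{1}{2}\ln|\cos x+\sin x|+\frac{x}{2}+C,$$
which can be verified by differentiating back.</p>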
Answer 2: <p>Hint: $\sin(x)+\cos(x)=\sqrt{2}\sin(x+\pi/4)$ Then substitute $u=x+\pi/4$.</p>
<p>A more general approach would be the following. Replace $\cos(x)$ and $\sin(x)$ by Euler's formula.</p>
<p>$$\int \frac{f(\cos(x),\sin(x))}{g(\cos(x),\sin(x))}dx=\int \frac{f(\exp(ix),\exp(-ix))}{g(\exp(ix),\exp(-ix))}dx.$$</p>
<p>Now, substitute $u=\exp(ix) \implies du=iudx$.</p>
<p>$$\int \frac{f(u,u^{-1})}{g(u,u^{-1})}\frac{du}{iu}=\int \frac{f(u)}{g(u)}\frac{du}{iu}.$$</p>
<p>If $f$ and $g$ are simple polynomials the resulting integral can always be solved by partial fractions.</p>
Answer 3: <p><strong>Method 1:</strong><span class="math-container">$$\frac{\cos x}{\cos x+\sin x}=\frac{\cos(x-\pi/4+\pi/4)}{\sqrt{2}\cos(x-\pi/4)}=\frac{\cos(x-\pi/4)-\sin(x-\pi/4)}{2\cos(x-\pi/4)}$$</span></p>
<p><strong>Method 2:</strong> Write the numerator</p>
<p><span class="math-container">$\cos x=A(\cos x+\sin x)+B\frac{d}{dx}(\cos x+\sin x)$</span></p>
<p>to find <span class="math-container">$A$</span> and <span class="math-container">$B$</span>; then the answer will be <span class="math-container">$Ax+B\log|\cos x+\sin x|+c$</span></p>
Answer 4: <p>HINT:</p>
<p>For $I=\displaystyle\int\dfrac{a\cos x+b\sin x}{c\cos x+d\sin x}dx,$</p>
<p>write $a\cos x+b\sin x=A(c\cos x+d\sin x)+B\cdot\dfrac{d(c\cos x+d\sin x)}{dx}$ where $A,B$ are arbitrary constants so that</p>
<p>$I=\displaystyle A\int\ dx+B\int\dfrac{d(c\cos x+d\sin x)}{c\cos x+d\sin x}$</p>
<p>Compare the coefficients of $\cos x,\sin x$ to find $A,B$</p>
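<p>For the integral at hand, $a=1$, $b=0$ and $c=d=1$, so the comparison of coefficients reads (as a sketch):</p>
<p>$$\cos x=A(\cos x+\sin x)+B(\cos x-\sin x)\implies A+B=1,\quad A-B=0\implies A=B=\tfrac{1}{2},$$
giving $I=\frac{x}{2}+\frac{1}{2}\ln|\cos x+\sin x|+C$, in agreement with the other answers.</p>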
Answer 5: <p>Dividing numerator and denominator by $\cos x$ yields</p>
<p>$$\frac{dx}{\tan x+1}$$</p>
<p>Now substitute $x=\tan^{-1}t,dx=\dfrac{dt}{1+t^2}$ to get</p>
<p>$$\int\frac{dt}{(t+1)(t^2+1)}$$</p>
<p>which can now be solved with partial fractions. Similar to the other method but hopefully easier to see.</p>
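<p>The partial-fraction step, written out as a sketch:</p>
<p>$$\frac{1}{(t+1)(t^2+1)}=\frac{1}{2}\cdot\frac{1}{t+1}+\frac{1}{2}\cdot\frac{1-t}{t^2+1},$$
$$\int\frac{dt}{(t+1)(t^2+1)}=\frac{1}{2}\ln|t+1|+\frac{1}{2}\arctan t-\frac{1}{4}\ln(t^2+1)+C,$$
and substituting back $t=\tan x$ (so $\arctan t=x$) simplifies to $\frac{x}{2}+\frac{1}{2}\ln|\sin x+\cos x|+C$.</p>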
|
Asymptotic expansion of $I_n=\int_{0}^{1}\frac{1}{1+x^n} dx$ | 2,106,474 | 5 | https://math.stackexchange.com/questions/2106474/asymptotic-expansion-of-i-n-int-01-frac11xn-dx | <p>The problem is to find the asymptotic expansion of $I_n=\int_{0}^{1}\frac{1}{1+x^n} dx$ (at least the first 3 terms).</p>
<p>By using some simple bounding, I first showed that $I_n$ tends to $1$. Then I calculated the limit $n(1-I_n)$ using integration by parts and some more bounding. At the end I found:</p>
<p>$I_n=1-\frac{\ln(2)}{n}+o(\frac{1}{n})$</p>
<p>I guess there is another way to find this result by invoking the series expansion of $\frac{1}{1+x^n}$. This is more annoying, because you have to justify interchanging the integral with the infinite sum, and at the end you get the following sum:</p>
<p>$I_n\stackrel{?}{=}\sum_{k=0}^{\infty}\frac{(-1)^{k}}{kn+1}=1-\frac{1}{n}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k+\frac{1}{n}}$</p>
<p>In this case, one has to justify that the little $\frac{1}{n}$ doesn't create any problems and that we have in fact:</p>
<p>$\lim_{n \to \infty} \sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k+\frac{1}{n}}\stackrel{?}{=}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}=\ln(2)$</p>
<p>So my question is whether the second approach can be made rigorous, and <strong>how can we find the third term in the asymptotic expansion of $I_n$?</strong> Also, is there a standard method for solving these types of problems?</p>
| [
{
"answer_id": 2106504,
"score": 6,
"text": "<p>The second approach can be made rigorous: For every partial sum we have</p>\n\n<p>$$0 \\leqslant \\sum_{k = 0}^m (-1)^k x^{kn} \\leqslant 1,$$</p>\n\n<p>since $0 \\leqslant x^{(k+1)n} \\leqslant x^{kn} \\leqslant 1$ for $x\\in [0,1]$ and all $k\\in \\mathbb{N}$ if $n > 0$. If you can use it, just mumble \"dominated convergence\", and if you're dealing with Riemann integrals, combine the global boundedness with the uniform convergence on $[0,1-\\varepsilon]$ for every $\\varepsilon > 0$ to justify</p>\n\n<p>$$I_n = \\sum_{k = 0}^{\\infty} \\frac{(-1)^k}{kn+1} = 1 - \\sum_{k = 1}^{\\infty} \\frac{(-1)^{k+1}}{kn+1}.$$</p>\n\n<p>Now we can approximate the $\\frac{1}{kn+1}$ with the simpler to handle $\\frac{1}{kn}$:</p>\n\n<p>$$I_n = 1 - \\sum_{k = 1}^{\\infty} \\frac{(-1)^{k+1}}{kn} + \\sum_{k = 1}^{\\infty} (-1)^{k+1}\\biggl(\\frac{1}{kn} - \\frac{1}{kn+1}\\biggr).$$</p>\n\n<p>We know that the first series evaluates to $\\frac{\\ln 2}{n}$, and the remaining sum can be handled similarly:</p>\n\n<p>$$\\frac{1}{kn} - \\frac{1}{kn+1} = \\frac{1}{kn(kn+1)} = \\frac{1}{(kn)^2} - \\frac{1}{(kn)^2(kn+1)},$$</p>\n\n<p>and</p>\n\n<p>$$\\sum_{k = 1}^{\\infty} \\frac{(-1)^{k+1}}{(kn)^2} = \\frac{\\pi^2}{12n^2}$$</p>\n\n<p>gives the next term in the expansion.</p>\n\n<p>This can be continued as far as one desires, leading to</p>\n\n<p>$$I:n = 1 - \\frac{\\ln 2}{n} + \\sum_{m = 2}^{\\infty} (-1)^m\\bigl(1-2^{1-m}\\bigr)\\frac{\\zeta(m)}{n^m}.$$</p>\n"
},
{
"answer_id": 2106759,
"score": 3,
"text": "<p>Note that we can write</p>\n<p><span class=\"math-container\">$$\\begin{align}\nI_n&=\\int_0^1\\frac{1}{1+x^n}\\,dx\\\\\\\\\n&=1-\\frac1n\\int_0^1\\frac{x^{1/n}}{1+x}\\,dx\\tag 1\n\\end{align}$$</span></p>\n<hr />\n<blockquote>\n<p><strong>The first methodology in the OP is quite a tenable way forward. Here, we use successive integration by parts to obtain the full asymptotic series in reciprocal powers of <span class=\"math-container\">$n$</span>.</strong></p>\n</blockquote>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$u=x^{1/n}$</span> and <span class=\"math-container\">$v=\\log(1+x)$</span>, we obtain</p>\n<p><span class=\"math-container\">$$I_n=1-\\frac{\\log(2)}{n}+\\frac1{n^2}\\int_0^1 x^{1/n}\\frac{\\log(1+x)}{x}\\,dx \\tag2 $$</span></p>\n<p>Then, integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span>, we obtain</p>\n<p><span class=\"math-container\">$$\\begin{align}\nI_n&=1-\\frac{\\log(2)}{n}+\\frac1{n^2}\\underbrace{\\int_0^1 \\frac{\\log(1+x)}{x}\\,dx}_{\\eta(2)=-\\text{Li}_2(-1)=\\pi^2/12}-\\frac1{n^3}\\int_0^1 x^{-1+1/n}\\underbrace{\\int_0^x \\frac{\\log(1+x')}{x'}\\,dx'}_{\\log(1+x) \\le -\\text{Li}_2(-x)\\le x}\\,dx\\\\\\\\\n&=1-\\frac{\\log(2)}{n}+\\frac{\\pi^2}{12n^2}+O\\left(\\frac{1}{n^3}\\right)\n\\end{align}$$</span></p>\n<p>One can continue to integrate by parts by introducing higher order polylogarithm functions <span class=\"math-container\">$-\\text{Li}_\\ell(-1)=\\eta(\\ell)$</span>, in terms of the Dirichlet Eta function. The full result is</p>\n<p><span class=\"math-container\">$$I_n=1-\\frac{\\log(2)}{n}+\\sum_{k=2}^\\infty \\frac{(-1)^{k}\\eta(k)}{n^k}$$</span></p>\n<p>which agrees with the result reported by @DanielFischer.</p>\n<hr />\n<blockquote>\n<p><strong>PART <span class=\"math-container\">$2$</span>: I thought it would be a useful complement to Daniel Fischer's post to present the details regarding the interchange of operations. 
To that end, we proceed.</strong></p>\n</blockquote>\n<p>Note that we have for <span class=\"math-container\">$n\\ge 1$</span></p>\n<p><span class=\"math-container\">$$\\begin{align}\nI_n&=\\int_0^1 \\frac{1}{1+x^n}\\,dx\\\\\\\\\n&\\int_0^1 \\sum_{k=0}^\\infty (-1)^kx^{nk}\\,dx\\\\\\\\\n&=\\lim_{\\delta\\to 0^+}\\int_0^{1-\\delta}\\sum_{k=0}^\\infty (-1)^kx^{nk}\\,dx\\tag3 \n\\end{align}$$</span></p>\n<p>Since the series in <span class=\"math-container\">$(3)$</span> converges uniformly on <span class=\"math-container\">$[0,1-\\delta]$</span> for <span class=\"math-container\">$0<\\delta<1]$</span>, we can interchange the order of integration with the series to obtain</p>\n<p><span class=\"math-container\">$$\\begin{align}\nI_n&=\\lim_{\\delta\\to 0^+}\\sum_{k=0}^\\infty (-1)^k\\int_0^{1-\\delta}x^{nk}\\,dx\\\\\\\\\n&=\\lim_{\\delta\\to 0^+}\\sum_{k=0}^\\infty \\frac{(-1)^k(1-\\delta)^{nk+1}}{1+nk} \\tag 4\n\\end{align}$$</span></p>\n<p>For each <span class=\"math-container\">$n$</span>, we have <span class=\"math-container\">$\\left|\\sum_{k=1}^N (-1)^k\\right|\\le 1$</span> for all <span class=\"math-container\">$N$</span>, <span class=\"math-container\">$\\frac{(1-\\delta)^{1+nk}}{1+nk}$</span> is monotonic for each <span class=\"math-container\">$\\delta \\in [0,1]$</span>, and <span class=\"math-container\">$\\frac{(1-\\delta)^{1+nk}}{1+nk}\\to 0$</span> uniformly for <span class=\"math-container\">$\\delta\\in[0,1]$</span>.</p>\n<p>Applying <a href=\"https://proofwiki.org/wiki/Dirichlet%27s_Test_for_Uniform_Convergence#Theorem\" rel=\"nofollow noreferrer\">Dirichlet's Test for Uniform Convergence</a>, we find that the series in <span class=\"math-container\">$(4)$</span> converges uniformly for <span class=\"math-container\">$\\delta\\in [0,1]$</span>. And inasmuch as <span class=\"math-container\">$\\frac{(-1)^k(1-\\delta)^{1+nk}}{1+nk}$</span> is a continuous function of <span class=\"math-container\">$\\delta$</span>, then the series is also continuous and we may interchange the limit and the series to arrive at the coveted result</p>\n<p><span class=\"math-container\">$$I_n=\\sum_{k=0}^\\infty\\frac{(-1)^k}{1+nk}$$</span></p>\n"
},
{
"answer_id": 2106514,
"score": 0,
"text": "<p>$I_n\n=\\int_{0}^{1}\\frac{1}{1+x^n} dx\n$</p>\n\n<p>Since,\nfor any integer $m$,\n$\\frac{1+x^{2m+1}}{1+x}\n=\\sum_{k=0}^{2m} (-1)^k x^k\n$,\n$\\frac{1}{1+x}\n=\\sum_{k=0}^{2m} (-1)^k x^k-\\frac{x^{2m+1}}{1+x}\n$,\nso\n$\\frac{1}{1+x^n}\n=\\sum_{k=0}^{2m} (-1)^k x^{nk}-\\frac{x^{n(2m+1)}}{1+x}\n$.</p>\n\n<p>Integrating,</p>\n\n<p>$\\begin{array}\\\\\nI_n\n&=\\int_{0}^{1}\\frac{1}{1+x^n} dx\\\\\n&=\\int_{0}^{1}\\left(\\sum_{k=0}^{2m} (-1)^k x^{nk}-\\frac{x^{n(2m+1)}}{1+x}\\right) dx\\\\\n&=\\sum_{k=0}^{2m} (-1)^k \\int_{0}^{1} x^{nk}dx-\\int_{0}^{1}\\frac{x^{n(2m+1)}dx}{1+x}\\\\\n&=\\sum_{k=0}^{2m} (-1)^k \\frac{x^{nk+1}}{nk+1}|_{0}^{1}-J_{n(2m+1)}\n\\qquad\\text{where } J_{n(2m+1)} =\\int_{0}^{1}\\frac{x^{n(2m+1)}dx}{1+x}\\\\\n&=\\sum_{k=0}^{2m} \\frac{(-1)^k}{nk+1}-J_{n(2m+1)}\\\\\n\\end{array}\n$</p>\n\n<p>Since\n$1 \\le 1+x \\le 2$,\n$J_{n(2m+1)}\n=\\int_{0}^{1}\\frac{x^{n(2m+1)}dx}{1+x}\n=c\\int_{0}^{1}x^{n(2m+1)}dx\n=\\frac{c}{n(2m+1)+1}\n$\nwhere\n$\\frac12 \\le c \\le 1$.</p>\n\n<p>Therefore the series converges,\nso\n$I_n\n=\\sum_{k=0}^{\\infty} \\frac{(-1)^k}{nk+1}\n$.</p>\n\n<p>To get an expression\nfor this sum,\nnote that it is\na multisection of\n$\\sum_{k=1}^{\\infty} \\frac{(-1)^k}{k}\n$.</p>\n"
},
{
"answer_id": 2106898,
"score": 0,
"text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>The integral can be expressed in closed form in terms of\n <a href=\"http://people.math.sfu.ca/~cbm/aands/page_258.htm\" rel=\"nofollow noreferrer\">Digamma Functions $\\Psi$</a>.\n Namely,</p>\n</blockquote>\n\n<p>\\begin{align}\nI_{n} & \\equiv \\int_{0}^{1}{\\dd x \\over 1 + x^{n}} =\n\\int_{0}^{1}{1 - x^{n} \\over 1 - x^{2n}}\\,\\dd x\n\\,\\,\\,\\stackrel{x^{2n}\\,\\,\\, \\mapsto\\ x}{=}\\,\\,\\,\n{1 \\over 2n}\n\\int_{0}^{1}{x^{1/\\pars{2n} - 1}\\ -\\ x^{1/\\pars{2n} - 1/2} \\over 1 - x}\\,\\dd x\n\\\\[5mm] & =\n{1 \\over 2n}\\bracks{\\int_{0}^{1}{1\\ -\\ x^{1/\\pars{2n} - 1/2} \\over 1 - x}\\,\\dd x -\n\\int_{0}^{1}{1 - x^{1/\\pars{2n} - 1} \\over 1 - x}\\,\\dd x}\n\\\\[5mm] & =\n{1 \\over 2n}\\bracks{\\Psi\\pars{{1 \\over 2} + {1 \\over 2n}} -\n\\Psi\\pars{1 \\over 2n}} =\n{1 \\over 2n}\\bracks{\\Psi\\pars{{1 \\over 2} + {1 \\over 2n}} -\n\\Psi\\pars{1 + {1 \\over 2n}} + {1 \\over 1/\\pars{2n}}}\n\\\\[5mm] & =\n1 + {1 \\over 2n}\\bracks{\\Psi\\pars{{1 \\over 2} + {1 \\over 2n}} -\n\\Psi\\pars{1 + {1 \\over 2n}}}\n\\\\[1cm] & =\n1 + \\underbrace{\\Psi\\pars{1/2} - \\Psi\\pars{1} \\over 2}_{\\ds{-\\ln\\pars{2}}}\\\n\\,{1 \\over n} +\n\\underbrace{\\Psi\\,'\\pars{1/2} - \\Psi\\,'\\pars{1} \\over 4}\n_{\\ds{\\pi^{2} \\over 12}}\\,{1 \\over n^{2}} + \\mrm{O}\\pars{1 \\over n^{3}}\n\\end{align}\n$$\n\\bbx{\\ds{%\n\\int_{0}^{1}{\\dd x \\over 1 + x^{n}} \\sim\n1 - {\\ln\\pars{2} \\over n} + {\\pi^{2}/12 \\over n^{2}}\n\\quad\\mbox{as}\\ n \\to \\infty}}\n$$</p>\n\n<blockquote>\n <p>$\\ds{\\Psi\\ \\mbox{and}\\ \\Psi\\,'}$ values for arguments $\\ds{1/2}$ and $1$ are given at the above mentioned link.</p>\n</blockquote>\n"
}
] | Question: Asymptotic expansion of $I_n=\int_{0}^{1}\frac{1}{1+x^n} dx$
Answer 1: <p>The second approach can be made rigorous: For every partial sum we have</p>
<p>$$0 \leqslant \sum_{k = 0}^m (-1)^k x^{kn} \leqslant 1,$$</p>
<p>since $0 \leqslant x^{(k+1)n} \leqslant x^{kn} \leqslant 1$ for $x\in [0,1]$ and all $k\in \mathbb{N}$ if $n > 0$. If you can use it, just mumble "dominated convergence", and if you're dealing with Riemann integrals, combine the global boundedness with the uniform convergence on $[0,1-\varepsilon]$ for every $\varepsilon > 0$ to justify</p>
<p>$$I_n = \sum_{k = 0}^{\infty} \frac{(-1)^k}{kn+1} = 1 - \sum_{k = 1}^{\infty} \frac{(-1)^{k+1}}{kn+1}.$$</p>
<p>Now we can approximate the $\frac{1}{kn+1}$ with the simpler to handle $\frac{1}{kn}$:</p>
<p>$$I_n = 1 - \sum_{k = 1}^{\infty} \frac{(-1)^{k+1}}{kn} + \sum_{k = 1}^{\infty} (-1)^{k+1}\biggl(\frac{1}{kn} - \frac{1}{kn+1}\biggr).$$</p>
<p>We know that the first series evaluates to $\frac{\ln 2}{n}$, and the remaining sum can be handled similarly:</p>
<p>$$\frac{1}{kn} - \frac{1}{kn+1} = \frac{1}{kn(kn+1)} = \frac{1}{(kn)^2} - \frac{1}{(kn)^2(kn+1)},$$</p>
<p>and</p>
<p>$$\sum_{k = 1}^{\infty} \frac{(-1)^{k+1}}{(kn)^2} = \frac{\pi^2}{12n^2}$$</p>
<p>gives the next term in the expansion.</p>
<p>This can be continued as far as one desires, leading to</p>
<p>$$I_n = 1 - \frac{\ln 2}{n} + \sum_{m = 2}^{\infty} (-1)^m\bigl(1-2^{1-m}\bigr)\frac{\zeta(m)}{n^m}.$$</p>
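<p>As a quick numerical sanity check of this expansion (a minimal Python sketch assuming the mpmath library; not part of the original argument), one can compare quadrature values of $I_n$ against the truncated series:</p>
<pre><code>from mpmath import mp, quad, zeta, log

mp.dps = 30  # working precision

def I(n):
    # numerical quadrature of the integral defining I_n
    return quad(lambda x: 1 / (1 + x**n), [0, 1])

def expansion(n, M=8):
    # truncation of 1 - ln(2)/n + sum_{m>=2} (-1)^m (1 - 2^(1-m)) zeta(m) / n^m
    s = 1 - log(2) / n
    for m in range(2, M + 1):
        s += (-1)**m * (1 - 2**(1 - m)) * zeta(m) / n**m
    return s

for n in (5, 10, 50):
    # truncating after m = M leaves an error of order n^(-(M+1))
    print(n, abs(I(n) - expansion(n)))
</code></pre>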
Answer 2: <p>Note that we can write</p>
<p><span class="math-container">$$\begin{align}
I_n&=\int_0^1\frac{1}{1+x^n}\,dx\\\\
&=1-\frac1n\int_0^1\frac{x^{1/n}}{1+x}\,dx\tag 1
\end{align}$$</span></p>
<hr />
<blockquote>
<p><strong>The first methodology in the OP is quite a tenable way forward. Here, we use successive integration by parts to obtain the full asymptotic series in reciprocal powers of <span class="math-container">$n$</span>.</strong></p>
</blockquote>
<p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(1)$</span> with <span class="math-container">$u=x^{1/n}$</span> and <span class="math-container">$v=\log(1+x)$</span>, we obtain</p>
<p><span class="math-container">$$I_n=1-\frac{\log(2)}{n}+\frac1{n^2}\int_0^1 x^{1/n}\frac{\log(1+x)}{x}\,dx \tag2 $$</span></p>
<p>Then, integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span>, we obtain</p>
<p><span class="math-container">$$\begin{align}
I_n&=1-\frac{\log(2)}{n}+\frac1{n^2}\underbrace{\int_0^1 \frac{\log(1+x)}{x}\,dx}_{\eta(2)=-\text{Li}_2(-1)=\pi^2/12}-\frac1{n^3}\int_0^1 x^{-1+1/n}\underbrace{\int_0^x \frac{\log(1+x')}{x'}\,dx'}_{\log(1+x) \le -\text{Li}_2(-x)\le x}\,dx\\\\
&=1-\frac{\log(2)}{n}+\frac{\pi^2}{12n^2}+O\left(\frac{1}{n^3}\right)
\end{align}$$</span></p>
<p>One can continue to integrate by parts by introducing higher order polylogarithm functions <span class="math-container">$-\text{Li}_\ell(-1)=\eta(\ell)$</span>, in terms of the Dirichlet Eta function. The full result is</p>
<p><span class="math-container">$$I_n=1-\frac{\log(2)}{n}+\sum_{k=2}^\infty \frac{(-1)^{k}\eta(k)}{n^k}$$</span></p>
<p>which agrees with the result reported by @DanielFischer.</p>
<hr />
<blockquote>
<p><strong>PART <span class="math-container">$2$</span>: I thought it would be a useful complement to Daniel Fischer's post to present the details regarding the interchange of operations. To that end, we proceed.</strong></p>
</blockquote>
<p>Note that we have for <span class="math-container">$n\ge 1$</span></p>
<p><span class="math-container">$$\begin{align}
I_n&=\int_0^1 \frac{1}{1+x^n}\,dx\\\\
&=\int_0^1 \sum_{k=0}^\infty (-1)^kx^{nk}\,dx\\\\
&=\lim_{\delta\to 0^+}\int_0^{1-\delta}\sum_{k=0}^\infty (-1)^kx^{nk}\,dx\tag3
\end{align}$$</span></p>
<p>Since the series in <span class="math-container">$(3)$</span> converges uniformly on <span class="math-container">$[0,1-\delta]$</span> for <span class="math-container">$0<\delta<1$</span>, we can interchange the integral and the series to obtain</p>
<p><span class="math-container">$$\begin{align}
I_n&=\lim_{\delta\to 0^+}\sum_{k=0}^\infty (-1)^k\int_0^{1-\delta}x^{nk}\,dx\\\\
&=\lim_{\delta\to 0^+}\sum_{k=0}^\infty \frac{(-1)^k(1-\delta)^{nk+1}}{1+nk} \tag 4
\end{align}$$</span></p>
<p>For each <span class="math-container">$n$</span>, we have <span class="math-container">$\left|\sum_{k=1}^N (-1)^k\right|\le 1$</span> for all <span class="math-container">$N$</span>, <span class="math-container">$\frac{(1-\delta)^{1+nk}}{1+nk}$</span> is monotonically decreasing in <span class="math-container">$k$</span> for each <span class="math-container">$\delta \in [0,1]$</span>, and <span class="math-container">$\frac{(1-\delta)^{1+nk}}{1+nk}\to 0$</span> as <span class="math-container">$k\to\infty$</span>, uniformly for <span class="math-container">$\delta\in[0,1]$</span>.</p>
<p>Applying <a href="https://proofwiki.org/wiki/Dirichlet%27s_Test_for_Uniform_Convergence#Theorem" rel="nofollow noreferrer">Dirichlet's Test for Uniform Convergence</a>, we find that the series in <span class="math-container">$(4)$</span> converges uniformly for <span class="math-container">$\delta\in [0,1]$</span>. And inasmuch as <span class="math-container">$\frac{(-1)^k(1-\delta)^{1+nk}}{1+nk}$</span> is a continuous function of <span class="math-container">$\delta$</span>, then the series is also continuous and we may interchange the limit and the series to arrive at the coveted result</p>
<p><span class="math-container">$$I_n=\sum_{k=0}^\infty\frac{(-1)^k}{1+nk}$$</span></p>
Answer 3: <p>$I_n
=\int_{0}^{1}\frac{1}{1+x^n} dx
$</p>
<p>Since,
for any integer $m$,
$\frac{1+x^{2m+1}}{1+x}
=\sum_{k=0}^{2m} (-1)^k x^k
$,
$\frac{1}{1+x}
=\sum_{k=0}^{2m} (-1)^k x^k-\frac{x^{2m+1}}{1+x}
$,
so
$\frac{1}{1+x^n}
=\sum_{k=0}^{2m} (-1)^k x^{nk}-\frac{x^{n(2m+1)}}{1+x}
$.</p>
<p>Integrating,</p>
<p>$\begin{array}\\
I_n
&=\int_{0}^{1}\frac{1}{1+x^n} dx\\
&=\int_{0}^{1}\left(\sum_{k=0}^{2m} (-1)^k x^{nk}-\frac{x^{n(2m+1)}}{1+x}\right) dx\\
&=\sum_{k=0}^{2m} (-1)^k \int_{0}^{1} x^{nk}dx-\int_{0}^{1}\frac{x^{n(2m+1)}dx}{1+x}\\
&=\sum_{k=0}^{2m} (-1)^k \frac{x^{nk+1}}{nk+1}|_{0}^{1}-J_{n(2m+1)}
\qquad\text{where } J_{n(2m+1)} =\int_{0}^{1}\frac{x^{n(2m+1)}dx}{1+x}\\
&=\sum_{k=0}^{2m} \frac{(-1)^k}{nk+1}-J_{n(2m+1)}\\
\end{array}
$</p>
<p>Since
$1 \le 1+x \le 2$,
$J_{n(2m+1)}
=\int_{0}^{1}\frac{x^{n(2m+1)}dx}{1+x}
=c\int_{0}^{1}x^{n(2m+1)}dx
=\frac{c}{n(2m+1)+1}
$
where
$\frac12 \le c \le 1$.</p>
<p>Since $J_{n(2m+1)} \to 0$ as $m \to \infty$,
the series converges, so
$I_n
=\sum_{k=0}^{\infty} \frac{(-1)^k}{nk+1}
$.</p>
<p>To get an expression
for this sum,
note that it is
a multisection of
$\sum_{k=1}^{\infty} \frac{(-1)^k}{k}
$.</p>
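<p>The two-sided bound on the remainder is easy to confirm numerically (a minimal sketch, assuming the mpmath library):</p>
<pre><code>from mpmath import mp, quad

mp.dps = 20
n, m = 4, 3
N = n * (2*m + 1)                             # the exponent n(2m+1)
J = quad(lambda x: x**N / (1 + x), [0, 1])
lower, upper = 0.5 / (N + 1), 1.0 / (N + 1)   # c/(N+1) with 1/2 <= c <= 1
print(lower <= J <= upper)                    # True
</code></pre>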
Answer 4: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>The integral can be expressed in closed form in terms of
<a href="http://people.math.sfu.ca/~cbm/aands/page_258.htm" rel="nofollow noreferrer">Digamma Functions $\Psi$</a>.
Namely,</p>
</blockquote>
<p>\begin{align}
I_{n} & \equiv \int_{0}^{1}{\dd x \over 1 + x^{n}} =
\int_{0}^{1}{1 - x^{n} \over 1 - x^{2n}}\,\dd x
\,\,\,\stackrel{x^{2n}\,\,\, \mapsto\ x}{=}\,\,\,
{1 \over 2n}
\int_{0}^{1}{x^{1/\pars{2n} - 1}\ -\ x^{1/\pars{2n} - 1/2} \over 1 - x}\,\dd x
\\[5mm] & =
{1 \over 2n}\bracks{\int_{0}^{1}{1\ -\ x^{1/\pars{2n} - 1/2} \over 1 - x}\,\dd x -
\int_{0}^{1}{1 - x^{1/\pars{2n} - 1} \over 1 - x}\,\dd x}
\\[5mm] & =
{1 \over 2n}\bracks{\Psi\pars{{1 \over 2} + {1 \over 2n}} -
\Psi\pars{1 \over 2n}} =
{1 \over 2n}\bracks{\Psi\pars{{1 \over 2} + {1 \over 2n}} -
\Psi\pars{1 + {1 \over 2n}} + {1 \over 1/\pars{2n}}}
\\[5mm] & =
1 + {1 \over 2n}\bracks{\Psi\pars{{1 \over 2} + {1 \over 2n}} -
\Psi\pars{1 + {1 \over 2n}}}
\\[1cm] & =
1 + \underbrace{\Psi\pars{1/2} - \Psi\pars{1} \over 2}_{\ds{-\ln\pars{2}}}\
\,{1 \over n} +
\underbrace{\Psi\,'\pars{1/2} - \Psi\,'\pars{1} \over 4}
_{\ds{\pi^{2} \over 12}}\,{1 \over n^{2}} + \mrm{O}\pars{1 \over n^{3}}
\end{align}
$$
\bbx{\ds{%
\int_{0}^{1}{\dd x \over 1 + x^{n}} \sim
1 - {\ln\pars{2} \over n} + {\pi^{2}/12 \over n^{2}}
\quad\mbox{as}\ n \to \infty}}
$$</p>
<blockquote>
<p>$\ds{\Psi\ \mbox{and}\ \Psi\,'}$ values for arguments $\ds{1/2}$ and $1$ are given at the above mentioned link.</p>
</blockquote>
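<p>The intermediate closed form $I_n = \frac{1}{2n}\left[\Psi\left(\frac12 + \frac{1}{2n}\right) - \Psi\left(\frac{1}{2n}\right)\right]$ is also easy to test (a minimal sketch, assuming mpmath's <code>digamma</code>):</p>
<pre><code>from mpmath import mp, quad, digamma, mpf

mp.dps = 25
for n in (2, 5, 10):
    lhs = quad(lambda x: 1 / (1 + x**n), [0, 1])
    rhs = (digamma(mpf(1)/2 + mpf(1)/(2*n)) - digamma(mpf(1)/(2*n))) / (2*n)
    print(n, abs(lhs - rhs))  # agreement to working precision for each n
</code></pre>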
|
What is the point of a $1\times 1$ matrix? | 2,106,866 | 5 | https://math.stackexchange.com/questions/2106866/what-is-the-point-of-a-1-times-1-matrix | <p>Surely a $1\times 1$ matrix can only 'produce' vectors with 1 entry, and can take as input also only one entry vectors.</p>
<p>So, is there any use for $1\times 1$ matrices?</p>
<p>Since to me they seem to do the same as scalars, only worse.</p>
| [
{
"answer_id": 2106873,
"score": 11,
"text": "<p>What is the use of the number zero? It doesn't do anything when you add it, and it always does the same thing when you multiply it. </p>\n"
},
{
"answer_id": 2106867,
"score": 4,
"text": "<p>Maybe the point of $1\\times 1$ matrices is to not have to change the definition of matrix to exclude the $1\\times 1$ case.</p>\n"
},
{
"answer_id": 2106871,
"score": 3,
"text": "<p>Maybe you come across a 1 dimension linear map in the wild and you want to make sure it's invertible and you've taken a linear algebra class and you don't want to draw or think about computing an inverse and you remember that the determinant of a 1 by 1 matrix is the scalar.</p>\n"
},
{
"answer_id": 2106952,
"score": 2,
"text": "<p>The point of a $n\\times n$ (or $m\\times n$)matrix is usually to generalize to $n$ dimensions what you already knew in $1$-dimension. And the point of the $1\\times 1$ matrix is to give you back what you knew in $1$-dimension. So you won't need to prove something twice, once for 1-dimension and once for multiple dimensions (ಠ‿ಠ)</p>\n"
}
] | Question: What is the point of a $1\times 1$ matrix?
Answer 1: <p>What is the use of the number zero? It doesn't do anything when you add it, and it always does the same thing when you multiply it. </p>
Answer 2: <p>Maybe the point of $1\times 1$ matrices is to not have to change the definition of a matrix to exclude the $1\times 1$ case.</p>
Answer 3: <p>Maybe you come across a one-dimensional linear map in the wild and want to check that it's invertible; you've taken a linear algebra class, you don't want to draw or think about computing an inverse, and you remember that the determinant of a $1\times 1$ matrix is just its scalar entry.</p>
Answer 4: <p>The point of an $n\times n$ (or $m\times n$) matrix is usually to generalize to $n$ dimensions what you already knew in one dimension, and the point of the $1\times 1$ matrix is to give you back what you knew in one dimension. So you won't need to prove something twice, once for one dimension and once for multiple dimensions (ಠ‿ಠ)</p>
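A quick sanity check of the claims in Answers 3 and 4 — a minimal sketch in Python, assuming NumPy is available (the matrix entries here are arbitrary illustrations, not taken from the original answers): the determinant of a $1\times 1$ matrix is its single entry, its inverse is the reciprocal of that entry, and $1\times 1$ matrix multiplication reduces to ordinary scalar multiplication.

import numpy as np

a = np.array([[3.0]])   # a 1x1 matrix wrapping the scalar 3.0
b = np.array([[2.0]])   # another 1x1 matrix

# Answer 3: the determinant of a 1x1 matrix is just its scalar entry
assert np.isclose(np.linalg.det(a), 3.0)

# ...so invertibility reduces to that entry being nonzero, with inverse 1/a
assert np.allclose(np.linalg.inv(a), np.array([[1.0 / 3.0]]))

# Answer 4: multiplying 1x1 matrices gives back ordinary scalar multiplication
assert np.isclose((a @ b)[0, 0], 3.0 * 2.0)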
|